pgbadger-13.1/.editorconfig

root = true
[*]
charset = utf-8
indent_style = tab
indent_size = 8
# Unix-style newlines
end_of_line = lf
# Remove any whitespace characters preceding newline characters
trim_trailing_whitespace = true
# Newline ending every file
insert_final_newline = true
[*.yml]
indent_style = space
indent_size = 2
pgbadger-13.1/.gitignore

blib/
Makefile
MYMETA.json
MYMETA.yml
pm_to_blib
pgbadger-13.1/CONTRIBUTING.md

# How to contribute
## Before submitting an issue
1. Upgrade to the latest version of pgBadger and see if the problem remains
2. Look at the [closed issues](https://github.com/darold/pgbadger/issues?state=closed); we may have already answered a similar problem
3. [Read the doc](http://pgbadger.darold.net/documentation.html); it is short and useful
## Coding style
The pgBadger project provides a [.editorconfig](http://editorconfig.org/) file to
set up consistent spacing in files. Please follow it!
## Keep documentation updated
The primary pgBadger documentation is `pgbadger --help`. The `--help` output fills the
SYNOPSIS section in `doc/pgBadger.pod`. The DESCRIPTION section *must* be
written directly in `doc/pgBadger.pod`. `README` is the plain-text rendering of
`doc/pgBadger.pod`.
After updating `doc/pgBadger.pod`, rebuild `README` and `README.md` with the following commands:
```shell
$ perl Makefile.PL && make README
```
When you're done contributing to the docs, commit your changes. Note that you must have `pod2markdown` installed to generate `README.md`.
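If `pod2markdown` is not already on your system, it is provided by the Pod::Markdown distribution; one way to install it, assuming a CPAN client such as `cpanm` is available (not something the project mandates), is:

```shell
$ cpanm Pod::Markdown
```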
pgbadger-13.1/ChangeLog

2025-03-16 - v13.1
This is a maintenance release of pgBadger that fixes issues reported by
users since the last release and adds some new features:
- Add a new report about vacuum throughput, with a graph of the vacuums per
table that consume the most CPU. The table output reports I/O read and write
timing per table as well as the CPU time spent on the table.
Thanks to Ales Zeleny for the feature request.
This patch also adds frozen pages and tuples to the Vacuums per Table
report.
- Add the --no-fork option, for debugging purposes, to not fork any process
at all (see the example below). Thanks to Ales Zeleny for the feature request.
- Add milliseconds to the raw CSV output. Thanks to Henrietta Dombrovskaya
for the feature request.
- Add the log filename to sample reports when multiple files are processed.
Thanks to Adrien Nayrat for the feature request.
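As an illustration (the log path is hypothetical), a single-process debugging run could look like:
pgbadger --no-fork /var/log/postgresql/postgresql.log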
Here is the complete list of changes and acknowledgments:
- Fix bind parameter parsing. Thanks to Thomas Kotzian for the patch.
- Apply query filters to multi-line queries. Thanks to Benjamin Jacobs
for the patch.
- Update test result for log filename storage changes
- Fix ERROR vs LOG message level in json output. Thanks to Philippe Viegas
for the report.
- Remove import of the tmpdir method, which is not exported by File::Temp. Thanks to
kmoradha for the report.
2024-12-08 - v13.0
This is a major release of pgBadger that fixes issues reported by
users since the last release and adds some new features:
* Add two new options to redefine the bounds of the query and session
histograms (see the example after this list).
--histogram-query VAL : use custom bounds for the query times histogram.
Default bounds in milliseconds:
0,1,5,10,25,50,100,500,1000,10000
--histogram-session VAL : use custom bounds for the session times histogram.
Default bounds in milliseconds:
0,500,1000,30000,60000,600000,1800000,3600000,28800000
Thanks to JosefMachytkaNetApp for the feature request.
* Add support for auto_explain plans in the csv and json log formats. Thanks
to zxwsbg and to Alexander Rumyantsev for the report.
* Add three LOG messages that were not reported as events: "unexpected EOF",
"incomplete startup packet" and "detected deadlock while waiting for". Thanks
to dottle for the report.
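As a sketch (the bound values and log path are purely illustrative), custom histogram bounds could be set like this:
pgbadger --histogram-query 0,10,100,1000,10000 --histogram-session 0,1000,60000,3600000 /var/log/postgresql/postgresql.log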
Backward compatibility issues:
- Change the way LOG-level events shown in the Events reports are
stored. Some of them were still reported and counted as errors instead
of as LOG-level entries. The fix is to store and report them as EVENTLOG
to differentiate them from queries. This change introduces a backward
compatibility break when pgbadger is used in incremental mode. You will
just see both behaviors during the week of the upgrade.
Thanks to Matti Linnanvuori for the report.
Bug fixes:
- Fix missing queries in the report of queries generating the most cancellations due to statement_timeout.
- Update regression tests
- Fix formatting of explain plans when extracted from the csv log format.
- Fix missing autovacuum data in jsonlog reports:
Average Autovacuum Duration, Tuples removed per table
and vacuums by hour in the autovacuum activity report.
Thanks to Ales Zeleny for the patch.
- Fix orphan lines not associated with the time-consuming bind queries.
Thanks to Henrietta Dombrovskaya for the report.
- Fix use of uninitialized value in pattern match. Thanks to Junior Dias
for the patch.
- Apply the --csv-separator option to the raw CSV export. The default separator
is a semicolon (;). Thanks to Henrietta Dombrovskaya for the feature
request.
- Raw CSV output: do not add double quotes to parameters and the application
name if they are empty.
- Add double quotes around queries that contain a semicolon in the raw CSV output.
Thanks to Henrietta Dombrovskaya for the report.
2023-12-25 - v12.4
This is a maintenance release of pgBadger that fixes issues reported by
users since the last release.
- Fix pgbouncer report with version 1.21. Thanks to Ales Zeleny for the patch.
- Prevent per-file parallelism from being higher than the number of files. Thanks
to maliangzhu for the report.
- Fix regression test broken since v12.3. Thanks to ieshin for the report.
- Fix cases where LOG entries were counted as ERROR log level entries. Thanks
to Matti Linnanvuori for the report.
2023-11-27 - v12.3
This is a maintenance release of pgBadger that fixes issues reported by
users since the last release. It also adds some new features:
* Add option --include-pid to only report events related to a session
pid (%p). Can be used multiple times. Thanks to Henrietta Dombrovskaya
for the feature request.
* Add option --include-session to only report events related to the
session id (%c). Can be used multiple times. Thanks to Henrietta Dombrovskaya
for the feature request.
* Add new option --dump-raw-csv to only parse the log and dump the information
into CSV format. No further processing is done and no report is generated
(see the example after this list). Thanks to Henrietta Dombrovskaya for the feature request.
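As an illustration (the pid value and log path are hypothetical), a raw CSV dump restricted to a single backend could be run as:
pgbadger --dump-raw-csv --include-pid 12345 /var/log/postgresql/postgresql.log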
Here is the complete list of changes and acknowledgments:
- Update pgFormatter to version 5.5
- Fix end date of parsing with jsonlog format. Thanks to jw1u1 for the report.
- Fix typo in "Sessions per application". Thanks to fairyfar for the patch.
- Fix "INSERT/UPDATE/DELETE Traffic" chart bug. Thanks to fairyfar for the
patch.
- Fix parsing of orphan lines with bind queries. Thanks to youxq for the
report.
- Fix Analyze per table report with new PG versions. Thanks to Jean-Christophe
Arnu for the patch.
- Fix syslog entry parser when the syslog timestamp contains milliseconds.
Thanks to Pavel Rabel for the report.
2023-08-20 - v12.2
This is a maintenance release of pgBadger that fixes issues reported by
users since the last release. It also adds two new features:
* Add support for max, avg, min autovacuum duration. Thanks to Francisco
Reinolds for the patch.
* Add support for pgbouncer's average waiting time. Thanks to Francisco
Reinolds for the patch.
Here is the complete list of changes and acknowledgments:
- Fix broken HTML output when application name contains <...>. Thanks to
Fabio Geiss for the report.
- Fix incorrect association of orphan lines when a filter on database was
applied. Thanks to jcasanov for the report.
- Fix logplex prefix parsing.
- Fix logplex orphan lines detection.
- Fix `autovacuum`'s `system usage: CPU: ...` line parsing. Thanks to
Francisco Reinolds for the patch.
- Avoid prepending output directory if output is stdout.
- Standardise Average Query Duration label. Thanks to Francisco Reinolds
for the patch
- Update documentation for new pgbadger options. Thanks to Francisco Reinolds
for the patch.
- Fix a case where parsing was not aborted when no file handle could be opened.
Thanks to vp for the report.
- Fix help by adding %p/%t mandatory placeholder log information. Thanks to
Christophe Courtois for the patch.
- Fix --retention parameter. Thanks to Bertrand Bourgier for the patch.
- Fix cleanup of the output directory, removed by commit 0e5c7d5, when the HTML
output dir is set. Thanks to Bertrand Bourgier for the report.
- Fix the output extension when the destination directory contains a character that
needs to be escaped in a regexp. Thanks to Bertrand Bourgier for the patch.
- Replace calls to POSIX::strftime("%s", ....) by a call to localtime for
Windows port. Thanks to Bertrand Bourgier for the patch.
- Fix html output dir cleanup. Thanks to Bertrand Bourgier for the patch.
- Use https for explain URL by default. Thanks to Philipp Trulson for the
patch.
2023-03-20 - v12.1
This is a maintenance release of pgBadger that fixes issues reported by users
over the past six months.
Here is the complete list of changes and acknowledgments:
- Fix parsing of multiline parameters. Thanks to Bekir Niyaz for the report.
- Fix failure to normalize query with ::tsrange. Thanks to Philippe Griboval
for the report.
- Add logical decoding consistent point and start for slot log entries to
the events report.
- Handle other ns + timezone format in timestamp. Thanks to Ronan Dunklau
for the report.
- Fix detection of %m when notation with T is used. Thanks to Ronan Dunklau
for the report.
- Add parsing of CloudNativePG generated logs. Thanks to codrut panea for
the patch.
- Fix unused option --outdir in report generation. Thanks to Frederic Guiet
for the report.
- Update README with last documentation changes. Thanks to Manisankar for
the report.
- Fix a typo in pgbadger examples. Thanks to Shinichi Hashiba for the patch.
2022-09-13 - v12.0
This major release of pgBadger fixes some issues reported by users over the
past five months. As usual, there are also new features and improvements:
* Remove support for Tsung output.
* Improve pgBadger performance when there are hundreds of bind parameters
to replace.
* Remove option -n | --nohighlight, which is no longer used since the upgrade to
pgFormatter 4.
* Use the POST method to send auto_explain plans to explain.depesz.com to avoid
the GET parameter length limit.
* Apply --exclude-query and --include-query to bind/parse traces.
* Add a link to pgBadger report examples to the documentation.
Here is the complete list of changes and acknowledgments:
- Fix monthly reports that were failing with "log file ... must exists". Thanks
to Jaume Sabater for the report.
- Fix pgbouncer start parsing debug message when input is stdin. Thanks to
aleszeleny for the report.
- Remove support for Tsung output.
- Drastically improve pgBadger performance for bind parameter
replacement, which could make pgbadger run indefinitely when there were
hundreds of parameters. Thanks to Monty Mobile for the report.
- Fix documentation about pgBadger return codes and also some wrong return
codes in some places. Thanks to Jaume Sabater for the report.
- Fix several typos. Thanks to David Gilman for the patch.
- Remove option -n | --nohighlight, which is no longer used since the upgrade to
pgFormatter 4. Thanks to Elena Indrupskaya for the report.
- Lots of pgBadger documentation fixes. Thanks to Elena Indrupskaya from
Postgres Pro for the patch.
- Allow half-hour offsets in --log-timezone and --timezone; the value can be an
integer, e.g. 2, or a float, e.g. 2.5. Thanks to Mujjamil-K for the feature request.
- Allow use of regexp for --exclude-app and --exclude-client. Thanks to
rdnkrkmz for the feature request.
- Allow use of --explain-url with the previous commit and restore the limitation
to the explain text format.
- Use the POST method to send auto_explain plans to explain.depesz.com to avoid
the GET parameter length limit. Thanks to hvisage for the report.
- Apply --exclude-query and --include-query to bind/parse traces. Thanks to
Alec Lazarescu for the report.
- Fix parsing of autovacuum stats from RDS logs. Thanks to David Gilman for
the report.
- Fix passing of the log format when parsing remote logs. Thanks to spookypeanut
for the report.
- Add link to pgBadger report examples to documentation.
- Fix Session per user reports. Thanks to vitalca for the report.
- Fix jsonlog parsing from PG15 output.
- Fix text-based error/events reporting. Thanks to Michael Banck for the patch.
- Fix regexp typo in normalize_error(). Thanks to Michael Banck for the patch.
2022-04-08 - v11.8
This release of pgBadger fixes some issues reported by users over the past
three months, most notably two fixes for new log entry detection in
incremental mode.
* Fix detection of new log entries with timestamps when millisecond (%m) or
epoch (%n) was used in log_line_prefix.
* Fix detection of new log entries in local files when multiprocessing was not
used.
Here is the complete list of changes and acknowledgments:
- Full review and simplification of the log file change detection.
- Reports messages "could not (receive|send) data (from|to) client" in the
Events reports. Thanks to Adrien Nayrat for the report.
- Fix parsing issue when the name of a prepared query contain the ':'
character. Thanks to aleszeleny for the report.
- Fix detection of new log entries with timestamp when millisecond (%m) or
epoch (%n). Thanks to aleszeleny for the report.
- Fix detection of new log entries in local file when multiprocess was not
used. Thanks to aleszeleny for the report.
- Fix detection of new log entries in remote files through ssh. Thanks to
Luca Ferrari for the report
- Fix garbage in username of "Connections per user" report. Thanks to
caseyandgina for the report.
- Fix the ssh command when using a URI; the ssh options were missing. Thanks to Luca
Ferrari for the report.
- Handle queryid %Q placeholder. Thanks to Adrien Nayrat for the patch.
- Fix typo in error sentence. Thanks to Luca Ferrari for the patch
- Report message: "server process was terminated by signal" in the Events
report. Thanks to Avi Vallarapu for the report.
- doc: fix filename for incremental every week command. Thanks to Theophile
Helleboid for the patch.
- t/04_advanced.t: Fix syslog test. Thanks to Christoph Berg for the patch.
2022-01-23 - v11.7
This release of pgBadger fixes some issues reported by users over the past
five months and adds some improvements:
* Add a new --no-progressbar option to hide the progress bar but keep the
other outputs.
* Add a new option --day-report that can be used to rebuild an HTML report
over the specified day, like option --month-report but only for a day.
It requires the incremental output directories and the presence of all
necessary binary data files. The value is a date in the format YYYY-MM-DD
(see the example after this list).
* Improve parsing of Heroku logplex and cloudsql json logs.
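As a sketch (the output directory and date are illustrative), rebuilding a single day's report from existing incremental data could look like:
pgbadger -I -O /var/lib/pgbadger --day-report 2022-01-15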
Here is the complete list of changes and acknowledgments:
- Update contribution guidelines and Makefile.PL to improve consistency,
clarity, and dependencies. Thanks to diffuse for the patch.
- Fix use of last parse file (--last-parsed) with binary mode. Thanks to
wibrt for the report.
- Add regression test for --last-parsed use and fix regression test on
report for temporary files only.
- Fix title for session per host graph. Thanks to Norbert Bede for the
report.
- Fix the week number when computing weekly reports when the --iso-week-number
and --incremental options were enabled. Thanks to hansgv for the report.
- Add the --no-progressbar option to not display the progress bar and keep the
other outputs. Thanks to seidlmic for the feature request.
- Prevent too many unknown-format-line prints in debug mode for multi-line
jsonlog.
- Fix parsing of single line cloudsql json log. Thanks to Thomas Leclaire
for the report.
- Fix temporary files summary with log_temp_files only.
- Print debug message with -v even if -q or --quiet is used.
- Fix autodetection of jsonlog file.
- Fix parsing of cloudsql log file. Thanks to Luc Lamarle for the report.
- Fixes pid extraction in parse_json_input. Thanks to Francois Scala for
the patch.
- Add new option --day-report with value as date in format: YYYY-MM-DD
that can be used to rebuild an HTML report over the specified day.
Thanks to Thomas Leclaire for the feature request.
- Fix query counter in progress bar. Thanks to Guillaume Lelarge for the
report.
- Fix incomplete queries stored for top bind and prepare reports.
- Fix normalization of object identifiers; in some cases the numbers were
replaced by a ?.
- Fix unformatted normalized queries when there is a comment at the beginning.
- Fix multi-line in stderr format when --dbname is used. Thanks to
Guillaume Lelarge for the report.
- Fix not generated reports in incremental mode when --dbname is used.
Thanks to Dudley Perkins for the report.
- Do not die anymore if a binary file is not compatible; switch to the next
file. Thanks to Thomas Leclaire for the suggestion.
- Fix Heroku logplex format change in pgbadger parser. Thanks to François
Pietka for the report.
2021-09-04 - v11.6
This release of pgBadger fixes some issues reported by users over the past
seven months and adds some improvements:
* Add detection of Query Id in log_line_prefix new in PG14. Thanks to
Florent Jardin for the report.
* Add advanced regression tests with db exclusion and the explode
feature. Thanks to MigOps Inc for the patch.
* Apply multiprocess to report generation when --explode is used.
Thanks to MigOps Inc for the patch and Thomas Leclaire for the
feature request.
* Add --iso-week-number in incremental mode: calendar weeks start
on a Monday and respect the ISO 8601 week number, range 01 to 53,
where week 1 is the first week that has at least 4 days in the new
year (see the example after this list). Thanks to Alex Muntada for the feature request.
* Add command line option --keep-comments to not remove comments from
normalized queries. It can be useful if you want to distinguish
between otherwise identical normalized queries. Thanks to Stefan Corneliu Petrea
for the feature request.
* Skip INFO lines introduced in the PostgreSQL log file by third-party
software. Thanks to David Piscitelli for the report.
* Add compatibility with PostgresPro log files including the row count
and size in bytes following the statement duration. Thanks to
panatamann for the report.
* Parse timestamps containing a 'T' to allow using the timestamps from journalctl.
Thanks to Graham Christensen for the patch.
* Improve the Windows port. Thanks to Bertrand Bourgier for the patches.
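As an illustration (the output directory and log path are hypothetical), ISO week numbering in incremental mode could be enabled with:
pgbadger -I --iso-week-number -O /var/lib/pgbadger /var/log/postgresql/postgresql.log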
Important note:
* Expect --iso-week-number to become the default in the next major
release and the --start-monday option to be removed, as the week
will always start on a Monday. The possibility of having week reports
start on a Sunday will be removed to simplify the code.
Here is the complete list of changes and acknowledgments:
- Fix duplicate of warning message:
"database ... must be vacuumed within ... transactions".
Thanks to Christophe Courtois for the report.
- Fix use of uninitialized variable. Thanks to phiresky for the report.
- Improve query id detection: it can be negative, and it is also read
from csvlog.
- Fix case where last file in incremental mode is always parsed even if
it was already done. Thanks to Thomas Leclaire for the report.
- Update the syslog format regex to handle session line indicators that
contain only one int instead of two ints separated by a dash. Thanks to
Timothy Alexander for the patch.
- Fix the --exclude-db option to still create the related report with json
logs. Thanks to MigOps Inc for the patch and Thomas Leclaire for the
report.
- Add regression test about Storable buggy version.
- Fix use of uninitialized value in substitution iterator in incremental
mode during the week report generation. Thanks to Thomas Leclaire,
Michael Vitale, Sumeet Shukla and Stefan Corneliu Petrea for the report.
- Add 'g' option to replace all bind parameters. Thanks to Nicolas Lutic
and Sebastien Lardiere for the patch.
- Documentation improvements. Thanks to Stefan Petrea for the patch.
- Fixes change log time zone calculation. Thanks to Stefan Petrea for
the patch.
- Fix log filter by begin/end time.
- Fix wrong association of orphan lines for multi-line queries with a
filter on database. Thanks to Abhishek Mehta for the report.
- Fix reports in incremental mode where the --dbname parameter was partially
ignored with the "explode" option (-E). Thanks to lrevest for the report.
- Update javascript resources.
- Fix display of menu before switching to hamburger mode when screen is
reduced. Thanks to Guillaume Lelarge for the report.
- Fix bind parameter values spanning multiple lines in the log, which were
not well supported.
- Apply the same fix as in pgFormatter for the previous patch.
- Fix another use of an uninitialized value in a substitution iterator from
the pgFormatter code. Thanks to Christophe Courtois for the report.
- Fix query normalization. Thanks to Jeffrey Beale for the patch.
- Be sure that all statements end with a semicolon when --dump-all-queries
is used. Thanks to Christian for the report.
- Fix typo and init of EOL type with multiple log files.
- Add auto-detection of the EOL type to fix the LAST_PARSED offset when the EOL is
2 bytes (Windows case). Thanks to Bertrand Bourgier for the patch.
- Fix get_day_of_week() port on Windows where strftime %u is not supported.
Thanks to Bertrand Bourgier for the patch.
- Fix the Windows port where the pl2bat.bat perl utility created a corrupted
pgbadger.bat due to the way __DATA__ was read in pgbadger. Thanks to
Bertrand Bourgier for the patch.
- Fix begin/end time filter and add regression test for timestamp filters.
Thanks to Alexis Lahouze and plmayekar for the report.
- Fix use of uninitialized value in pattern match introduced by pgFormatter
update. Thanks to arlt for the report.
2021-02-18 - v11.5
This release of pgBadger fixes some issues reported by users over the
past three months and adds some improvements:
* Add a report about sessions idle time, computed as:
(total sessions time - total queries time) / number of sessions
This requires that log_connections and log_disconnections have been
enabled and that log_min_duration_statement = 0 (all queries logged)
to have a reliable value. This can help to know how much idle time
is lost, and whether a pooler in transaction mode would be useful.
This report is available in the "Sessions" tab of "Global Stats"
and in the "Sessions" tab of the "General Activity" reports (per hour).
* Add anonymization of numeric values, replaced by 4 random digits.
* Update SQL beautifier based on pgFormatter 5.0.
Here is the complete list of changes and acknowledgments:
- Fix parsing of cloudsql multi-line statement. Thanks to Jon Young for the report.
- Add regression test for anonymization.
- Fix anonymization broken by maxlength truncate. Thanks to artl for the report.
- Add anonymization of parameter in time consuming prepare and bind reports. Thanks to arlt for the report.
- Add support to microseconds in logplex log line prefix. Thanks to Ross Gardiner for the report.
- Add report about sessions idle time. Thanks to Guillaume Lelarge for the feature request.
- Complete patch to support multi-line in jsonlog format.
2020-11-24 - v11.4
This release of pgBadger fixes some issues reported by users over the
past four months. It improves support for PostgreSQL 13 log information
and adds some new features:
* Add full autovacuum information in the "Vacuums per table" report for
buffer usage (hits, missed, dirtied), skipped due to pins, skipped
frozen and WAL usage (records, full page images, bytes). In the
"Tuples removed per table" report the additional autovacuum information is
tuples remaining, tuples not yet removable and pages remaining.
This information is only available on the "Table" tab.
* Add a new distribution report about checkpoint starting causes.
* Add detection of application name from connection authorized traces.
Here is the complete list of changes and acknowledgments:
- Fix typo in an error message. Thanks to Vidar Tyldum for the patch.
- Fix Windows port with error: "can not load incompatible binary data".
Thanks to Eric Brawner for the report.
- Fix typo on option --html-outdir in pgbadger usage and documentation.
Thanks to Vidar Tyldum for the patch.
- Fix autodetection of jsonlog/cloudsql format. Thanks to Jon Young
for the report.
- Fix CSV log parsing with PG v13. Thanks to Kanwei Li for the report
and Kaarel Moppel for the patch.
- Fix sort of queries generating the most temporary files report.
Thanks to Sebastien Lardiere for the report.
- Add pgbadger version trace in debug mode.
2020-07-26 - v11.3
This release of pgBadger fixes several issues reported by users over the
past four months. It also adds some new features and new command line
options:
* Add autodetection of UTC timestamps to avoid applying a timezone
to graphs.
* Add support for the GCP CloudSQL json log format.
* Add new option --dump-all-queries to use pgBadger to dump all
queries to a text file; no report is generated, just the full list
of statements found in the PostgreSQL log. Bind parameters are
inserted into the queries at their respective positions.
* Add new option -Q | --query-numbering used to add numbering of
queries to the output when using options --dump-all-queries or
--normalized-only.
* Add new command line option --tempdir to set the directory where
temporary files will be written. Can be useful on systems that do
not allow writing to /tmp.
* Add command line option --ssh-port used to set the ssh port if it is not
the default 22. The URI notation also supports ssh port
specification by using the form:
ssh://192.168.1.100:2222//var/log/postgresql-11.log
An illustrative invocation follows this list.
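As a sketch (the paths are hypothetical), dumping every query with numbering while redirecting temporary files could be done with:
pgbadger --dump-all-queries -Q --tempdir /srv/tmp /var/log/postgresql-11.log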
Here is the complete list of changes and acknowledgments:
- Fix incremental reports for jsonlog/cloudsql log format. Thanks
to Ryan DeShone for the report
- Add autodetection of UTC timestamp to avoid applying autodetected
timezone for graphs. With UTC time the javascript will apply the
local timezone. Thanks to Brett Stauner for the report.
- Fix incremental parsing of journalctl logs that did not work from the
second run onward. Thanks to Paweł Koziol for the patch.
- Fix path to resources file when -X and -E are used. Thanks to Ryan
DeShone for the report.
- Fix General Activity report about read/write queries. Thanks to
alexandre-sk5 for the report.
- Add a debug message when parallel mode is not used.
- Fix elsif logic in file size detection and extra space introduced
in the journalctl command when the --since option is added. Thanks
to Pawel Koziol for the patch.
- Fix "not a valid file descriptor" error. Thanks to Pawel Koziol
for the report.
- Fix incremental mode with RDS files. Thanks to Ildefonso Camargo,
nodje and John Walsh for the report.
- Add new option -Q | --query-numbering used to add numbering of
queries to the output when using options --dump-all-queries or
--normalized-only. This can be useful to extract multiline queries
in the output file from an external script. Thanks to Shantanu Oak
for the feature request.
- Fix parsing of cloudsql json logs when log_min_duration_statement
is enabled. Thanks to alexandre-sk5 for the report.
- Fix wrong hash key for users in RDS log. Thanks to vosmax for the
report.
- Fix error related to modification of non-creatable array value.
Thanks to John Walsh and Mark Fletcher for the report.
- Add support for the GCP CloudSQL json log format; the log format (-f) is
jsonlog. Thanks to Thomas Poindessous for the feature request.
- Add new option --dump-all-queries to use pgBadger to dump all
queries to a text file; no report is generated, just the full list
of statements found in the PostgreSQL log. Bind parameters are
inserted into the queries at their respective positions. There is
no sorting of unique queries; all queries are logged. Thanks to
Shantanu Oak for the feature request.
- Add documentation for --dump-all-queries option.
- Fix vacuum report for new PG version. Thanks to Alexey Timanovsky
for the report.
- Add new command line option --no-process-info to disable changing the
process title (used to help identify the pgbadger process); some systems do
not allow it. Thanks to Akshay2378 for the report.
- Add new command line option --tempdir to set the directory where
temporary files will be written. Default:
File::Spec->tmpdir() || '/tmp'
Can be useful on systems that do not allow writing to /tmp. Thanks
to Akshay2378 for the report.
- Fix unsupported compressed filenames with spaces and/or brackets.
Thanks to Alexey Timanovsky for the report.
- Add command line option --ssh-port used to set the ssh port if it is not
the default 22. The URI notation also supports ssh port
specification by using the form:
ssh://192.168.1.100:2222//var/log/postgresql-11.log
Thanks to Augusto Murri for the feature request.
2020-03-11 - v11.2
This release of pgBadger fixes several issues reported by users over the
past six months. It also adds some new features:
* Add support for and autodetection of the AWS Redshift log format.
* Add support for the new pgbouncer 1.11 log format.
* Handle the zstd and lz4 compression formats.
* Allow fully separating the statistics build and the HTML report build in
incremental mode without having to read a log file. For example,
it is possible to run pgbadger every hour as follows:
pgbadger -I -O "/out-dir/data" --noreport /var/log/postgresql*.log
It just creates the binary data files in "/out-dir/data"; then,
for example, you can build reports each night for the next day in
a separate directory `/out-dir/reports`:
pgbadger -I -l "/out-dir/data/LAST_PARSED" -H "/out-dir/reports" /out-dir/data/2020/02/19/*.bin
This requires setting the path to the last parsed information, the
path where HTML reports will be written and the binary data files
of the day.
There are also new command line options:
* Add new command line option --explain-url used to override the URL
of the graphical explain tool, in case you want to use a local install of
PgExplain or another tool. The default URL is:
http://explain.depesz.com/?is_public=0&is_anon=0&plan=
pgBadger will append the plan, in text format and escaped, to the end of
the URL (an illustrative invocation follows this list).
* Add new option --no-week to instruct pgbadger to not build weekly
reports in incremental mode. Useful if it takes too much time and
resources.
* Add new command line option --command to be able to set a command
that pgBadger will execute to retrieve log entries on stdin.
pgBadger will open a pipe to the command and parse log entries
generated by the command. For example:
pgbadger -f stderr --command 'cat /var/log/postgresql.log'
which is the same as executing pgbadger with the log file directly
as an argument. The interest of this option is obvious if you have to
modify the log file on the fly or if log entries are extracted
from a program or generated from a database. For example:
pgbadger -f csv --command 'psql dbname -c "COPY jrn_log TO STDOUT (FORMAT CSV)"'
* Add new command line option --noexplain to prevent pgBadger from
parsing and reporting explain plans written to the log by the auto_explain
extension. This is useful if you have a PostgreSQL version < 9.0,
where pgBadger generates broken reports when there are explain plans
in the log.
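For instance (the URL and log path are hypothetical), pointing the graphical explain links at a local tool could look like:
pgbadger --explain-url 'https://explain.example.com/?plan=' /var/log/postgresql/postgresql.log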
Backward compatibility:
- By default pgBadger now truncates queries at 100000 characters.
This is an arbitrary value and can be adjusted using option --maxlength.
The previous behavior was to not truncate queries, but this could
lead to excessive resource usage. Limiting the default size is safer,
and the limit is high enough to avoid truncation in most cases. However,
queries will not be beautified if they exceed 25000 characters.
Here is the complete list of changes and acknowledgments:
- Fix non working --exclude-client option. Thanks to John Walsh
for the report.
- Add regression test for RDS log parsing and --exclude-client.
- Fix the progress bar for pgbouncer log files. The "queries" label is
changed to "stats" for pgbouncer log files.
- Add command line option --explain-url used to override the url
of the graphical explain tool. Thanks to Christophe Courtois for
the feature request.
- Add support to pgbouncer 1.11 new log format. Thanks to Dan
Aksenov for the report.
- Handle zstd and lz4 compression format. Thanks to Adrien Nayrat
for the patch.
- Add support for and autodetection of the AWS Redshift log format. Thanks
to Bhuvanesh for the feature request.
- Update documentation about redshift log format.
- Add new option --no-week to instruct pgbadger to not build weekly
reports in incremental mode. Thanks to cleverKermit17 for the
feature request.
- Fix a pattern match on file path that breaks pgBadger on Windows.
- Fix #554 about Cyrillic and other encoded statement parameters
that were not reported properly in the HTML report, even with a custom
charset. The regression was introduced with a fix for the well-known
Perl error message "Wide character in print". The patch has
been reverted and a new command line option, --wide-char, is
available to recover this behavior. Add this option to your
pgbadger command if you get the message "Wide character in print".
Also add a regression test with Cyrillic and French encoding. Thanks
to 4815162342lost and yethee for the report.
- Update documentation to inform that lc_messages = 'en_US.UTF-8'
is valid too. Thanks to nodje for the report.
- Update documentation about --maxlength, whose default truncate size
is now 100000 and which no longer defaults to no truncation. Thanks to nodje for
the report.
- Fix retention calculation at year overlap. Thanks to Fabio Pereira
for the patch.
- Fix parsing of rds log file format. Thanks to Kadaffy Talavera for
the report.
- Prevent generating an empty index file in incremental mode when there
are no new log entries. Thanks to Kadaffy Talavera for the report.
- Fix non up to date documentation. Thanks to Eric Hanson for the
patch.
- Fixes the command line parameter from -no-explain to -noexplain.
Thanks to Indrek Toom for the patch.
- Fall back to default file size when totalsize can not be found.
Thanks to Adrien Nayrat for the patch.
- Fix some dates in examples. Thanks to Greg Clough for the patch.
- Use compressed file extension regexp in remaining test and extract
.bin extension in a separate condition.
- Handle zstd and lz4 compression format. Thanks to Adrien Nayrat
for the patch.
- Fix remaining call of SIGUSR2 on Windows. Thanks to inrap for the
report.
- Fix progress bar with log file of indetermined size.
- Add new command line option --command to be able to set a command
that pgBadger will execute to retrieve log entries on stdin.
Thanks to Justin Pryzby for the feature request.
- Add new command line option --noexplain to prevent pgBadger from
parsing and reporting explain plans written to the log by the auto_explain
extension. This is useful if you have a PostgreSQL version < 9.0,
where pgBadger generates broken reports when there are explain plans
in the log. Thanks to Massimo Sala for the feature request.
- Fix RDS log parsing when the prefix is set at command line. Thanks
to Bing Zhao for the report.
- Fix incremental mode with rds log format. Thanks to Bing Zhao for
the report.
- Fix possible rds log parsing. Thanks to James van Lommel and Simon
Dobner for the report.
- Fix statement classification and add regression test. Thanks to
alexanderlaw for the report.
- Fix anonymization of single characters in IN clause. Thanks to
Massimo Sala for the report.
- Fix RDS log parsing for rows without client/user/db information.
Thanks to Konrad for the report.
2019-09-16 - v11.1
This release of pgBadger fixes several issues reported by users over the past
three months. It also adds some new features and reports:
- Add report of top N queries that consume the most time in the
prepare or parse stage.
- Add report of top N queries that consume the most time in the
bind stage.
- Add a report of timing for the prepare/bind/execute query parts,
reported in a new "Duration" tab in the Global Stats report. Example:
Total query duration: 6m16s
Prepare/parse total duration: 45s564ms
Bind total duration: 4m46s
Execute total duration: 44s71ms
This also fixes the previous "Total query duration" report, which was
only reporting the execute total duration.
- Add support for the RDS and CloudWatch log formats; they are detected
automatically. You can use -f rds if pgbadger is not able to
auto-detect the log format.
- Add new configuration option --month-report to be able to build
monthly incremental reports.
- Restore support to Windows operating system.
There are also some bug fixes and feature enhancements.
- Add auto-generated Markdown documentation in README.md using the
pod2markdown tool. If the command is not present the file will just not
be generated. Thanks to Derek Yang for the patch.
- Translate action WITH into CTE, a regression introduced in the last release.
- Fix support for the Windows operating system.
- Add support for the RDS and CloudWatch log formats; use -f rds if pgbadger is
not able to auto-detect this log format. Thanks to peruuparkar for the
feature request.
- Fix option -f | --format that was not applied to all files from the
parameter list when log format auto-detection was failing; the format was
taken from the first file parsed. Thanks to Levente Birta for the report.
- Update source documentation file to replace reference to pgBadger v7.x
with v11. Thanks to Will Buckner for the patch.
- Limit the display height of top queries to avoid huge queries taking
the whole page. Thanks to ilias ilisepe1 for the patch.
- Fix overflow of queries and details in Slowest individual queries.
- Fix SSH URIs for files, directories and wildcards. Thanks to tbussmann for
the patch.
- Fix URI samples in documentation. Thanks to tbussmann for the patch.
- Hide the message about use of the default out file when --rebuild is used.
- Add an extra newline to usage() output to not break POD documentation at
make time.
- Reapply --exclude-client option description in documentation. Thanks to
Christoph Berg for the report.
2019-06-25 - v11.0
This release of pgBadger adds some major new features and fixes some
issues reported by users over the last four months. New features:
- Regroup cursor-related queries (DECLARE, CLOSE, FETCH, MOVE) into the new
query type CURSOR.
- Add top bind queries that generate the most temporary files.
Requires log_connections and log_disconnections to be activated.
- Add --exclude-client command line option to be able to exclude log
entries for the specified client ip. Can be used multiple times.
- Allow using a time only (without a date) in the --begin and --end filters.
- Add -H, --html-dir option to be able to set a different path where
HTML reports must be written in incremental mode. Binary files stay
in the directory defined with the -O, --outdir option.
- Add -E | --explode option to explode the main report into one
report per database. Global information not related to a database
is added to the postgres database report.
- Add a per-database report to incremental mode. In this mode there
will be a subdirectory per database with dedicated incremental
reports.
- Add support for Heroku's PostgreSQL logplex format. Logs can be
parsed using:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
- When a query is > 10Kb we first limit the size of all constant string
parameters to 30 characters and then the query is truncated to 10Kb.
This prevents pgbadger from wasting time or hanging on very long queries,
for example when inserting bytea. The 10Kb limit can be controlled
with the --maxlength command line parameter.
The query is normalized or truncated to the maxlength value only after
this first attempt to limit size.
This new release breaks backward compatibility with old binary or JSON
files. This also means that incremental mode will not be able to read
old binary files. If you want to update pgBadger and keep your old reports,
take care to upgrade at the start of a new week, otherwise the weekly report
will be broken. pgBadger will print a warning and just skip the old binary
file.
There are also some bug fixes and feature enhancements.
- Add a warning about version and skip loading incompatible binary file.
- Update code formatter to pgFormatter 4.0.
- Fix pgbadger hang on Windows OS. Thanks to JMLessard for the report.
- Update tools/pgbadger_tools script to be compatible with new binary
file format in pgBadger v11.
- Add top bind queries that generate the most temporary files. This
collection is possible only if log_connections and log_disconnections
are activated in postgresql.conf. Thanks to Ildefonso Camargo for
the feature request.
- Fix auto detection of timezone. Thanks to massimosala for the fix.
- Remove some remaining graphs when --nograph is used.
- Force use of the .txt extension when --normalized-only is used.
- Fix report of auto vacuum/analyze in logplex format. Thanks to
Konrad zichul for the report.
- Fix use of progress bar on Windows operating system. Thanks to
JMLessard for the report.
- Use `$prefix_vars{'t_time'}` to store the log time. Thanks to Luca
Ferrari for the patch.
- Update usage and documentation to remove perl command from pgbadger
invocations. Thanks to Luca Ferrari for the patch.
- Use begin and end with times without date. Thanks to Luca Ferrari
for the patch.
- Added some very minor spelling and grammar fixes to the readme file.
Thanks to ofni yratilim for the patch.
- Fix remote paths using SSH. Thanks to Luca Ferrari for the patch.
- Update regression tests to work with the new structure introduced with
the per-database report feature.
- Fix fractional seconds in all begin and end parameters. Thanks to
Luca Ferrari for the patch.
- Fix documentation URL. Thanks to Kara Mansel for the report.
- Fix parsing of auto_explain.
Add more information about the -U option, which can be used multiple times.
Thanks to Douglas J Hunley for the report.
- Lot of HTML / CSS report improvements. Thanks to Pierre Giraud for
the patches.
- Update resource file.
- Add regression test for logplex format.
- Add support for Heroku's PostgreSQL logplex format. You should be able
to parse these logs as follows:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
or, if you have already saved the output to a file:
pgbadger heroku.log
The logplex format is auto-detected like any other supported format.
pgBadger understands the following default log_line_prefix:
database = %d connection_source = %r sql_error_code = %e
or simply:
sql_error_code = %e
Let me know if there is any other default log_line_prefix. The prefix
can always be set using the -p | --prefix pgbadger option:
pgbadger -p 'base = %d source = %r sql_state = %e' heroku.log
for example.
Thanks to Anthony Sosso for the feature request.
- Fix pgbadger help on URI use.
- Fix broken wildcard use in ssh URI introduced in previous patch.
Thanks to Tobias Bussmann for the report.
- Allow URI with space in path to log file. Thanks to Tobias Bussmann
for the report.
- Fix URI samples in documentation. Thanks to Tobias Bussmann for the
patch.
- Fix t/02_basics.t to not fail if the syslog test takes more than 10s.
Thanks to Christoph Berg for the patch.
2019-02-14 - v10.3
This release of pgBadger is a maintenance release that fixes some
log format autodetection issues and another pgBouncer log parsing issue
reported by users. There is also a new feature:
The -o | --outfile option can now be used multiple times to dump
output in several formats in a single command. For example:
pgbadger -o out.html -o out.json /log/pgsql-11.log
will create two reports in html and json format saved in the
two corresponding files.
There are also some bug fixes and feature enhancements.
- Fix statistics reports when a filter on database, user,
client or application is requested. Some queries were not
reported.
- Fix autodetection of the pg>=10 default log line prefix.
- Fix autodetection of log files with a "non standard" log line prefix.
If --prefix specifies %t, %m, %n and %p or %c, set format to stderr.
Thanks to Alex Danvy for the report.
- Remove extra space at end of line.
- Add minimal test to syslog parser.
- Fix a call to autodetect_format().
- Truncate statement when maxlength is used. Thanks to Thibaud
Madelaine for the patch.
- Add test for multiple output format.
- The -o | --outfile option can now be used multiple times to dump
output in several formats in a single command. For example:
pgbadger -o out.txt -o out.html -o - -x json /log/pgsql-11.log
Here pgbadger will create two reports in text and html format
saved in the two corresponding files. It will also output a json
report on standard output. Thanks to Nikolay for the feature
request.
- Move detection of output format and setting of out filename into
a dedicated function set_output_extension().
- Fix another pgBouncer log parsing issue. Thanks to Douglas J.
Hunley for the report.
2018-12-27 - v10.2
This release of pgBadger is a maintenance release that fixes issues
reported by users during the last three months. There are also some new
features:
* Add support for the pgbouncer 1.8 Stats log format.
* Auto adjust the javascript graph timezone.
There are new command line options:
* Add --exclude-db option to compute report about everything except
the specified database.
* Add support for http or ftp remote PostgreSQL log file download.
The log file is parsed during the download using the curl command
and is never saved to disk. As with ssh remote log parsing, you can use
a URI as command line argument to specify the PostgreSQL log file.
ssh://localhost/postgresql-10-main.log
http://localhost/postgresql-10-main.log.gz
ftp://localhost/postgresql-10-main.log
With the http and ftp protocols you need to specify the log file format
at the end of the URI:
http://localhost/postgresql-10-main.log:stderr
You can specify multiple URIs for log files to be parsed. This is
useful when you have a pgbouncer log file on a remote host and
PostgreSQL logs on the local host (see the example after this list).
With the ssh protocol you can use wildcards too, like with remote
mode, e.g.: ssh://localhost/postgresql-10-main.log*
The old syntax to parse remote log files using the -r option still
works but is obsolete and might be removed in future versions.
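For instance (the hostname and paths are illustrative), a local PostgreSQL log and a remote pgbouncer log could be parsed in one run with:
pgbadger /var/log/postgresql/postgresql-10-main.log ssh://logs.example.com/var/log/pgbouncer/pgbouncer.log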
There are also some bug fixes and feature enhancements.
- Adjust the end of the progress bar for files with estimated size (bz2
compressed files and remote compressed files).
- Update year in copyright.
- Add information about URI notation to parse remote log files.
- Force progress to reach 100% at end of parsing of compressed
remote file.
- Extract information about PL/pgSQL function calls in queries of the
temporary file reports. The information is appended to the details
display block.
- Fix progress bar with csv files.
- Fix reading binary file as input file instead of log file.
- Encode html output of queries into UTF8 to avoid message "Wide
character in print". Thanks to Colin 't Hart for the report.
- Add Checkpoints distance key/value for distance peak.
- Fix pgbouncer parsing and request throughput reports. Thanks
to Levente Birta for the report.
- Fix use of csvlog instead of csv for input format.
- Add support to pgbouncer 1.8 Stats log format. Thanks to Levente
Birta for the report.
- Add warning about parallel processing disabled with csvlog. Thanks
to cstdenis for the report.
- Add information in usage output about single process forcing with
csvlog format in -j and -J options. Thanks to cstdenis for the
report.
- Fix unknown line format error for multi-line logs during incremental
analysis over ssh. Thanks to Wooyoung Cho for the report.
- Add -k (--insecure) option to curl command to be able to download
logs from server using a self signed certificate.
- Auto adjust javascript graph timezone. Thanks to Massimino Sala
for the feature request.
- Add support for HTTP logfile download by pgBadger, for example:
/usr/bin/pgbadger http://www.mydom.com/postgresql-10.log
The file is parsed during download using the curl command.
- Fix documentation. Thanks to 0xflotus for the patch.
- Reapply fix on missing replacement of bind parameters after some
extra code cleaning. Thanks to Bernhard J. M. Grun for the report.
- Add --exclude-db option to compute report about everything except
the specified database. The option can be used multiple time.
2018-09-12 - v10.1
This release of pgBadger is a maintenance release that fixes reports
in incremental mode and multiprocessing with the -j option. Log parsing from
standard input was also broken. If you are using v10.0 please upgrade
now.
- Add test on pgbouncer log parser.
- Some small performance improvements.
- Fix not a valid file descriptor at pgbadger line 12314.
- Fix unwanted newline in progressbar at startup.
- Remove circleci files from the project.
- Remove dependency of bats and jq for the test suite, they are
replaced with Test::Simple and JSON::XS.
- Add more tests especially for incremental mode and input from
stdin that was broken in release 10.0.
- Sync pgbadger, pod, and README, and fix some syntax errors.
Thanks to Christoph Berg for the patch.
- Add documentation on how to install Perl module JSON::XS from
apt and yum repositories.
- Fix URI for CSS in incremental mode. Thanks to Floris van Nee
for the report.
- Fix fatal error when looking for log from STDIN. Thanks to
Jacek Szpot for the report.
- Fixes SED use for OSX builds. Thanks to Steve Newson for the
patch.
- Fix illegal division by zero in incremental mode. Thanks to
aleszeleny for the report.
- Replace SQL::Beautify with v3.1 of pgFormatter::Beautify.
2018-09-09 - v10.0
This release of pgBadger is a major release that adds some new
features and fixes all issues reported by users since the last release.
* Add support for the pgbouncer syslog log file format.
* Add support for all auto_explain formats (text, xml, json and yaml).
* Add support for the %q placeholder in log_line_prefix.
* Add the jsonlog format of Michael Paquier's extension; with -f jsonlog
pgbadger will be able to parse the log.
* Replace the SQL formatter/beautifier with v3.0 of pgFormatter.
There are some new command line options:
- Add --prettify-json command line option to prettify JSON output.
- Add --log-timezone +/-XX command line option to set the number
of hours from GMT of the timezone that must be used to adjust the
date/time read from the log file before being parsed (see the example
after this list). Note that you might still need to adjust the graph
timezone using -Z when the client does not have the same timezone.
- Add --include-time option to add the ability to choose the times that
you want to see, instead of excluding all the times you do not
want to see (--exclude-time).
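As a sketch (the offset and log path are illustrative), log timestamps could be shifted by two hours before parsing with:
pgbadger --log-timezone +2 /var/log/postgresql/postgresql.log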
The pgBadger project and copyrights have been transferred from Dalibo
to the author and official maintainer of the project. Please update
your links:
- Web site: http://pgbadger.darold.net/
- Source code: https://github.com/darold/pgbadger
I want to thank the great guys at Dalibo for all their investment
in pgBadger during these years, and especially Damien Clochard and
Jean-paul Argudo for their help promoting pgBadger.
- Fix checkpoint distance and estimate not reported in incremental
mode. Thanks to aleszeleny for the report.
- Fix title of the pgbouncer simultaneous sessions report. Thanks to
Jehan Guillaume De Rorthais for the report.
- Add support of pgbouncer syslog log file format. Thanks to djester
for the feature request.
- Fix error when a remote log is empty. Thanks to Parasit Hendersson
for the report.
- Fix test with binary format. The binary file must be generated as it
is dependent on the platform. Thanks to Michal Nowak for the
report.
- Fix case where an empty explain plan is generated.
- Fix parsing of autodetected default format with a prefix in
command line.
- Remove dependency of git command in Makefile.PL.
- Update documentation about option changes and the removal of the
[%l-1] part of the mandatory prefix.
- Fix parsing of vacuum / analyze system usage for PostgreSQL 10.
Thanks to Achilleas Mantzios for the patch.
- Fix Temporary File Activity table.
- Remove dependency to git during install.
- Add --log-timezone +/-XX command line option to set the number
of hours from GMT of the timezone that must be used to adjust the
date/time read from the log file before being parsed. Using this
option makes searching the log with a date/time more difficult because the
time will not be the same in the log. Note that you might still
need to adjust the graph timezone using -Z when the client does not have
the same timezone. Thanks to xdexter for the feature request and
Julien Tachoire for the patch.
- Add support for the auto_explain json output format. Thanks to dmius
for the report.
- Fix the auto_explain parser and queries that were counted twice.
Thanks to zam6ak for the report.
- Fix checkpoint regex to match PostgreSQL 10 log messages. Thanks
to Edmund Horner for the patch.
- Update description of -f | --format option by adding information
about jsonlog format.
- Fix query normalisation to not duplicate bind queries.
Normalised values are now transformed into a single ?, no
longer 0 for numbers and two single quotes for strings. Thanks to vadv
for the report.
- Fix log level count. Thanks to Jean-Christophe Arnu for the report
- Make pgbadger more compliant with B::Lint bare sub name.
- Made perlcritic happy.
- Add --prettify-json command line option to prettify JSON output.
Default output is all in single line.
- Fix Events distribution report.
- Fix bug with --prefix when log_line_prefix contains multiple %%.
Thanks to svb007 for the report.
- Add --log-timezone +/-XX command line option to set the number
of hours from GMT of the timezone that must be used to adjust the
date/time read from the log file before being parsed. Using this
option makes searching the log with a date/time more difficult because the
time will not be the same in the log. Note that you might still
need to adjust the graph timezone using -Z when the client does not have
the same timezone. Thanks to xdexter for the feature request.
- Remove INDEXES from the keyword list and add BUFFERS to this list.
- Fix normalization of query using cursors.
- Remove Dockerfile and documentation about docker run. pgBadger
comes as a single Perl script without any dependencies and it can
be used on any platform. It makes no sense to use docker to run
pgbadger; if you don't want to install anything, just copy the
pgbadger file where you want and execute it.
- Fix broken grid when there is no temp file activity. Thanks to Pierre
Giraud for the patch.
- Add doc warning about log_min_duration_statement vs log_duration +
log_statement. Thanks to Julien Tachoire for the patch.
- Apply timezone offset to bar charts. Thanks to Julien Tachoire
for the patch.
- Delete current temp file info if we meet an error for the same PID.
Thanks to Julien Tachoire for the patch.
- Consistently use app= in examples, and support appname=.
Some of the usage examples used appname= in the prefix, but the
code didn't recognize that token. Use app= in all examples, and
add appname= to the prefix parser. Thanks to Christoph Berg for
the patch.
- Fix wrong long name for option -J, which should be --Jobs instead
of --job_per_file. Thanks to Chad Trabant for the report and
Etienne Bersac for the patch.
- Ignore blib files. Thanks to Etienne Bersac for the patch.
- Add consistency tests. Thanks to damien clochard for the patch.
- doc update : stderr is not a default for -f. Thanks to Christophe
Courtois for the patch.
- Always update pod and README. Thanks to Etienne Bersac for
the patch.
- Add some regression tests. Thanks to Etienne Bersac for the patch.
- Add editorconfig configuration. Thanks to Etienne Bersac for the
patch.
- Drop vi temp files from gitignore. Thanks to Etienne Bersac for
the patch.
- Add --include-time option to add the ability to choose the times that
you want to see, instead of excluding all the times you do not
want to see. This is handy when wanting to view only one or two
days from a week's worth of logs (simplifies down from multiple
--exclude-time options to one --include-time). Thanks to Wesley
Bowman for the patch.
- Check pod syntax. Thanks to Etienne Bersac for the patch.
- Add HACKING to document tests. Thanks to Etienne Bersac for the
patch.
- Drop obsolete --bar-graph option. Thanks to Etienne Bersac for
the patch.
- Drop misleading .perltidyrc. This file dates from 2012 and the
pgbadger code is far from compliant; the perltidy unified diff is
10k lines. Let's drop this. Thanks to Etienne Bersac for the
patch.
- Fix use of uninitialized value in SQL formatting. Thanks to John
Krugger for the report and Jean-paul Argudo for the report.
2017-07-27 - v9.2
This release of pgBadger is a maintenance release that adds some new
features.
* Add report of checkpoint distance and estimate.
* Add support of AWS Redshift keywords to SQL code beautifier.
* Add autodetection of log format in remote mode to allow remote
parsing of pgbouncer log file together with PostgreSQL log file.
There are also some bug fixes and feature enhancements.
- Fix histogram reports that were not showing data above
the last range.
- Fix parsing of journalctl without the log line number pattern
([%l-n]). Thanks to Christian Schmitt for the report.
- Add report of checkpoint distance and estimate. Thanks to jjsantam
for the feature request.
- Append more information on what is done by script to update CSS
and javascript files, tools/updt_embedded_rsc.pl.
- Do not warn when all log files are empty and exit with code 0.
- Fix build_log_line_prefix_regex() that does not include %n as a
lookup in %regex_map. Thanks to ghosthound for the patch.
- Change error level of "FATAL: cannot use CSV" to WARNING. Thanks
to kong1man for the report.
- Fix use of uninitialized value warning. Thanks to Payal for the
report.
- Add permission denied to error normalization
- Update pgbadger to latest commit 5bdc018 of pgFormatter.
- Add support for AWS Redshift keywords. Thanks to cavanaug for the
feature request.
- Fix missing query in temporary file report when the query was
canceled. Thanks to Fabrizio de Royes Mello for the report.
- Normalize queries with bind parameters, replaced with a ?.
- Sanity check to avoid end time before start time. Thanks to
Christophe Courtois for the patch.
- Fix a lot of mistyped words and do some grammatical fixes. Use
'pgBadger' where it refers to the program and not the binary file.
Also, use "official" expressions such as PgBouncer, GitHub, and
CSS. The POD file was synced with README. Thanks to Euler Taveira
for the patch.
- Fix broken menu when --disable-type is used: the top_cancelled_info
test and closing list must be inside the disable_type test. While at
it, indent the disable_lock test. Thanks to Euler Taveira for the patch.
- Fix use of uninitialized value. Thanks to johnkrugger for the
report.
- Remove test to read log file during log format auto-detection when
the file is hosted remotely. Thanks to clomdd for the report.
- Add autodetection of log format in remote mode to allow remote
parsing of pgbouncer log file together with PostgreSQL log file.
- Fix number of sessions wrongly increased after log line validation
Thanks to Achilleas Mantzios for the report.
- Minor reformatting of the pgBadger Description.
- Fix repeated info in documentation. Thanks to cscatolini for the patch.
2017-01-24 - v9.1
This release of pgBadger is a maintenance release that adds some new
features.
* Add report of error class distribution when SQLState is available
in the log_line_prefix (see %e placeholder).
* Update SQL Beautifier to pgFormatter v1.6 code.
* Improve error message normalization.
* Add --normalized-only option to generate a text file containing all
normalized queries found in a log with count.
* Allow %c (session id) to replace %p (pid) as unique session id.
* Add waiting for lock messages to event reports.
* Add --start-monday option to start calendar weeks on Monday
instead of the default Sunday.
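For illustration only (file names are placeholders), two of the new options
above can be used as follows:
    # Dump all normalized queries found in the log, with their counts
    pgbadger --normalized-only /var/log/postgresql/postgresql.log
    # Incremental reports with calendar weeks starting on Monday
    pgbadger -I -O /var/www/pgbadger --start-monday /var/log/postgresql/postgresql.log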
There are also some bug fixes and feature enhancements.
- Add report of error class distribution when SQLState is available
in the log line prefix. Thanks to jacks33 for the feature request.
- Fix incremental global index on resize. Thanks to clomdd for the
report.
- Fix command tag log_line_prefix placeholder %i to allow space
character.
- Fix --exclude-line option and removal of obsolete directories
when retention is enabled and --noreport is used.
- Fix typo in "vacuum activity table". Thanks to Nicolas Gollet for
the patch.
- Fix autovacuum report. Thanks to Nicolas Gollet for the patch.
- Fix the author of pgBadger's logo (Damien Cazeils) and English in
comments. Thanks to Thibaut Madelaine for the patch.
- Add information about pgbouncer log format in the -f option.
Thanks to clomdd for the report.
- Add --normalized-only information in documentation.
- Fix broken report of date-time introduced in previous patch.
- Fix duration/query association when log_duration=on and
log_statement=all. Thanks to Eric Jensen for the report.
- Fix normalization of messages about advisory lock. Thanks to
Thibaut Madelaine for the report.
- Fix report of auto_explain output. Thanks to fch77700 for the
report.
- Fix unwanted log format auto detection with log entry from stdin.
Thanks to Jesus Adolfo Parra for the report.
- Add left open parenthesis to the "stop" characters of the regex used
to look for the db client in the prefix, to handle the PostgreSQL
client string format that includes the source port. Thanks to Jon
Nelson for the patch.
- Fix some spelling errors. Thanks to Jon Nelson for the patch.
- Allow %c (session id) to replace %p (pid) as unique session id.
Thanks to Jerryliuk for the report.
- Allow pgbadger to parse the default log_line_prefix that will
probably be used in 10.0: '%m [%p] '
- Fix missing first line with interpreter call.
- Fix missing Avg values in CSV report. Thanks to Yosuke Tomita
for the report.
- Fix error message in autodetect_format() method.
- Add --start-monday option to start calendar weeks on Monday
instead of the default Sunday. Thanks to Joosep Mae for the feature
request.
- Fix --histo-average option. Thanks to Yves Martin for the report.
- Remove plural form of --ssh-option in documentation. Thanks to
mark-a-s for the report.
- Fix --exclude-time filter and rewrite code to skip unwanted line
as well code to update the progress bar. Thanks to Michael
Chesterton for the report.
- Fix support to %r placeholder in prefix instead of %h.
2016-09-02 - v9.0
This major release of pgBadger is a port to Bootstrap 3 and a version
upgrade of all resource files (CSS and JavaScript). There are also some
bug fixes and feature enhancements.
Backward compatibility with old incremental report might be preserved.
- Sources and licenses of resource files are now in a dedicated
subdirectory. A script to update their minified versions embedded
in the pgbadger script has been added. Thanks to Christoph Berg for
the help and feature request.
- Try to detect user/database/host from connection strings if
log_connections is enabled and log_line_prefix doesn't include
them.
Extend the regex to autodetect the database name, user name, client
ip address and application name. The regexes are now the following:
db => qr/(?:db|database)=([^,]*)/;
user => qr/(?:user|usr)=([^,]*)/;
client => qr/(?:client|remote|ip|host)=([^,]*)/;
appname => qr/(?:app|application)=([^,]*)/;
- Add backward compatibility with older version of pgbadger in
incremental mode by creating a subdirectory for new CSS and
Javascript files. This subdirectory is named with the major
version number of pgbadger.
- Increase the size of the pgbadger logo that appears too small
with the new font size.
- Normalize detailed information in all reports.
- Fix duplicate copy icon in locks report.
- Fix missing chart on histogram of session time. Thanks to
Guillaume Lelarge for the report.
- Add LICENSE file noting the licenses used by the resource
files. Thanks to Christoph Berg for the patch.
- Add patch to jqplot library to fix an infinite loop when trying
to download some charts. Thanks to Julien Tachoires for the help
to solve this issue.
- Script tools/updt_embedded_rsc.pl will apply the patch to resource
file resources/jquery.jqplot.js and doesn't complain if it has
already been applied.
- Remove single last comma at end of pie chart dataset. Thanks to
Julien Tachoires for the report.
- Change display of normalized error
- Remove unused or auto-generated files
- Update all resource files (js+css) and create a directory to
include the sources of the JavaScript libraries used in pgbadger.
There is also a new script, tools/updt_embedded_rsc.pl, that can be
used to generate the minified version of those files and embed them
into pgbadger. This script will also embed the FontAwesome.otf
open truetype font into the fontawesome.css file.
2016-08-27 - v8.3
This is a maintenance release that fixes some minor bugs. This release
also adds replication command messages statistics to the Events
reports.
- Fix auto-detection of stderr format with timestamp as epoch (%n).
- Fix histogram over multiples days to be cumulative per hour, not
an average of the number of event per day.
- Fix parsing of remote file that was failing when the file does
not exist locally. Thanks to clomdd for the report.
- Detect timezones like GMT+3 on CSV logs. Thanks to jacksonfoz
for the patch.
- Add replication command messages statistics to the Events
reports. Thanks to Michael Paquier for the feature request.
This is the last minor version of the 8.x series; the next major version
will include an upgrade of the Bootstrap and jQuery libraries, which needs
some major rewriting.
2016-08-11 version 8.2
This is a maintenance release that fixes some minor bugs. There are also
some performance improvements of up to 20% on huge files and some new
interesting features:
* Multiprocessing can be used with pgbouncer log files.
* pgBouncer and PostgreSQL log files can be used together in
incremental mode.
* With default or same prefix, stderr and syslog file can be
parsed together, csvlog format can always be used.
* Use a modal dialog window to download graphs as png images.
* Add pl/pgSQL function information to queries when available.
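For example, pgbouncer and PostgreSQL logs can now be processed together in
incremental mode with multiprocessing (paths and job count are illustrative):
    pgbadger -j 4 -I -O /var/www/pgbadger /var/log/postgresql/postgresql.log \
        /var/log/pgbouncer/pgbouncer.log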
Here is the complete list of changes:
- Fix report of database system messages.
- Fix multi line statement concatenation after an error.
- Fix box size for report of queries generating the most
temporary files and the most waiting queries.
- Rewrite code to better handle multi-line queries.
- Fix garbage in examples of event queries with error only mode
(option -w). Thanks to Thomas Reiss for the report.
- Fix getting dataset related to query duration with the use of
auto_explain. Thanks to tom__b for the patch.
- Use a modal dialog window to download graphs as png images.
- Huge rewrite of the incremental mechanism applied to log files
to handle PostgreSQL and pgbouncer logs at the same time.
- Multiprocess can be used with pgbouncer log.
- Add code to remove remaining keyword placeholders tags.
- Fix another possible case of truncated date in the LAST_PARSED file.
Thanks to brafaeloliveira for the report.
- Set default scale 1 in pretty_print_number() js function.
- Fix auto-detection of pgbouncer files that contain only stats
lines. Thanks to Glyn Astill for the patch.
- Add date to samples of queries generating most temporary files.
- Do not display the warning message about an empty log when quiet
mode is enabled.
- Fix reading from stdin by disabling pgbouncer format detection.
Thanks to Robert Vargason for the patch.
- Fix case of duplicate normalized error message with "nonstandard
use of ...".
- Fix storage of current temporary file related request.
- Use the mnemonic rather than a signal number in kill calls.
Thanks to Komeda Shinji for the patch.
2016-04-21 version 8.1
This is a maintenance release that fixes a major issue introduced with
pgbouncer support that prevented parsing of compressed PostgreSQL
log files, and adds some improvements.
Here is the complete list of changes:
- Fix one case where the pid file remained after dying.
- Add requirement of log_error_verbosity = default to documentation.
- Report message "LOG: using stale statistics instead of current
ones because stats collector is not responding" in events view.
- Remove obsolete days when we are in binary mode with --noreport
- Fix wrong report of statements responsible for temporary files.
Thanks to Luan Nicolini Marcondes for the report. This patch also
excludes lines with log level LOCATION from being parsed.
- Fix limit on number of sample at report generation and remove
pending LAST_PARSED.tmp file.
- Update load_stat() function and global variables to support
pgbouncer statistics. Update version to 2.0.
- Handle more kinds of query types. Thanks to Julien Rouhaud for
the patch.
- Fix pgbouncer log parser to handle message: FATAL: the database
system is shutting down
- Fix whitespace placed in between the E and the quote character.
Thanks to clijunky for the report.
- Fix a major issue introduced with pgbouncer support that
prevented parsing of compressed PostgreSQL log files. Thanks to
Levente Birta for the report.
2016-02-22 version 8.0
This is a major release that adds support to pgbouncer log files.
New pgbouncer reports are:
* Request Throughput
* Bytes I/O Throughput
* Average Query Duration
* Simultaneous sessions
* Histogram of sessions times
* Sessions per database
* Sessions per user
* Sessions per host
* Established connections
* Connections per database
* Connections per user
* Connections per host
* Most used reserved pools
* Most Frequent Errors/Events
pgbouncer log files can be parsed together with PostgreSQL logs.
It also adds two new command line options:
* --pgbouncer-only to only show pgbouncer related reports.
* --rebuild to be able to rebuild all html reports in incremental
output directory where binary data files are still available.
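As a sketch of how these two options might be used (paths are placeholders):
    # Only build the pgbouncer related reports
    pgbadger --pgbouncer-only /var/log/pgbouncer/pgbouncer.log
    # Rebuild all HTML reports from the binary files kept in the incremental
    # output directory
    pgbadger --rebuild -O /var/www/pgbadger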
This release fixes a major bug introduced with the journalctl code that
prevented the use of the multiprocess feature.
Here is the complete list of other changes:
- Fix progress bar with pgbouncer (only events are increased).
- Sort the %SYMBOLE hashtable to fix the "!=" / "=" bug. Thanks to
Nicolas Gollet for the patch.
- Fix incorrect numbers on positional parameters in report Queries
generating most temporary files. Thanks to Oskar Wiksten for the
report.
- Update operators list in SQL code beautifier with last update in
pgFormatter. Thanks to Laurenz Albe for the report and the list
of missing operators.
- Cosmetic change to code and add some more debug information.
2016-01-18 version 7.3
This is a maintenance release to fix a major bug that was breaking
the incremental mode in pgBadger. It also adds some more reports and
features.
* Add --timezone=+/-HH to control the timezone used in charts. The
javascript library runs at client side so the timezone used is
the browser timezone so the displayed time in the charts can be
different from the time in the log file.
* Add /tmp/pgbadger.pid file to prevent cron jobs overlapping on the
same log files.
* Add command line option --pid-dir to be able to run two pgbadger
at the same time by setting an alternate path to the pid file.
* Report information about "LOG: skipping analyze of ..." into
events reports.
* Report message "LOG: sending cancel to blocking autovacuum" into
events reports. Useful to look for queries generating autovacuum
kill on account of a lock issue.
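For illustration, the new timezone and pid options could be combined as follows
(values and paths are placeholders only):
    # Shift chart timezone by +2 hours and use an alternate pid directory so
    # that a second pgbadger run can work on other log files at the same time
    pgbadger --timezone=+2 --pid-dir /tmp/pgbadger-nightly /var/log/postgresql/postgresql.log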
Here is the complete list of changes:
- Automatically remove obsolete pid file when there is no other
pgbadger process running (unix only)
- Update documentation about the --timezone command line option.
- Add --timezone=+/-HH to control the timezone used in charts.
Thanks to CZAirwolf for the report.
- Fix Histogram of session times when there is no data.
- Fix unclosed test file.
- Fix another case where pgbadger.pid was not removed.
- Always display slides part on connections report even if there
is no data.
- Fix some label on sessions reports
- Add remove of pid file at normal ending.
- Fix wrong size/offset of log files that was breaking incremental
mode. Thanks a lot to CZAirwolf for the report and the help to
find the problem.
- Add command line option --pid-dir to be able to run two pgbadger
at the same time by setting an alternate path to the directory
where the pid file will be written.
- Add /tmp/pgbadger.pid file to prevent cron jobs overlapping on the
same log files.
- Report information about "LOG: skipping analyze of ..." into
events reports.
- Report message "LOG: sending cancel to blocking autovacuum" into
events reports. Usefull to know which queries generate autovacuum
kill on account of a lock issue.
- Add more debug information about check log parsing decision.
2016-01-05 version 7.2
This new release fixes some issues especially in temporary files
reports and adds some features.
* Allow pgBadger to natively parse the journalctl command output
* Add new keywords from PG 9.5 for code formatting
* Add support for the %n log_line_prefix option for Unix epoch (PG 9.6)
There's also a new command line option:
* Add --journalctl_cmd option to enable this functionality and
set the command. Typically:
--journalctl_cmd "journalctl -u postgresql-9.4"
to parse the output of PG 9.4 logs
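A complete invocation using this option might look like the following (the
output path is illustrative only):
    pgbadger --journalctl_cmd "journalctl -u postgresql-9.4" -o /var/www/pgbadger/report.html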
Here is the full list of changes/fixes:
- Fix missing detailed information (date, db, etc.) in Queries
generating the largest temporary files report.
- Fix label of sessions histogram. Thanks to Guillaume Lelarge
for the patch.
- Fix handling of cancelled queries that generate more than one
temporary file and, more generally, aggregate size on queries with
multiple (> 1GB) temporary files.
- Add "Total size" column in the Temporary Files Activity table and
fix size increment when a query has multiple 1GB temporary files.
- Fix temporary file query normalization and examples.
- Fix incomplete and wrong queries associated to temporary files
when STATEMENT level log line was missing. Thanks to Mael
Rimbault for the report.
- When -w or --watch-mode is used, the message "canceling statement
due to statement timeout" is now reported with other errors.
- Allow dot in dbname and user name. Thanks to David Turvey for
the patch.
- Remove use of unmaintained flotr2 javascript chart library and
use of jqflot instead.
- Fix bad formatting with anonymized values in queries.
- Display 0ms instead of 0s when the query time is under a millisecond.
Thanks to venkatabn for the report.
- Normalize cursor names. Patch from Julien Rouhaud
- Fix unregistered client host name with default pattern. Thanks to
Eric S. Lucinger Ruiz for the report.
- Remove redundant regular expressions.
- Tweaking awkward phrasing, correcting subject-verb agreements,
typos, and misspellings. Patch from Josh Kupershmid.
- Fix potential incorrect creation of subdirectory in incremental mode.
- Allow single white space after duration even if this should not appear.
- Update copyright.
2015-07-11 version 7.1
This new release fixes some issues and adds a new report:
* Distribution of sessions per application
It also adds Json operators to SQL Beautifier.
Here is the full list of changes/fixes:
- Fix unwanted seek on old parsing position when log entry is stdin.
Thanks to Olivier Schiavo for the report.
- Try to fix a potential issue in log start/end date parsing. Thanks
to gityerhubhere for the report.
- Fix broken queries with multiline bind parameters. Thanks to
Nicolas Thauvin for the report.
- Add new report Sessions per application. Thanks to Keith Fiske for
the feature request.
- Add Json Operators to SQL Beautifier. Thanks to Tom Burnett and
Hubert depesz Lubaczewski.
- Makefile.PL: changed manpage section from '1' to '1p', fixes #237.
Thanks to Cyril Bouthors for the patch.
- Update Copyright date-range and installation instructions that were
still referring to version 5. Thanks to Steve Crawford for the report.
- Fix typo in changelog
Note that new official releases must now be downloaded from GitHub and no longer
from SourceForge. Download at https://github.com/dalibo/pgbadger/releases
2015-05-08 version 7.0
This major release adds some more useful reports and features.
* New report about events distribution per 5 minutes.
* New per application details (total duration and times executed) for each
query reported in Top Queries reports. The details are visible from a new
button called "App(s) involved".
* Add support to auto_explain extension. EXPLAIN plan will be added together
with top slowest queries when available in log file.
* Add a link to automatically open the explain plan on http://explain.depesz.com/
* New report on queries cumulated durations per user.
* New report about the Number of cancelled queries (graph)
* New report about Queries generating the most cancellation (N)
* New report about Queries most cancelled.
Here is the full list of changes/fixes:
- Update documentation with last reports.
- Fix number of event samples displayed in event reports.
- Add new report about events distribution per x minutes.
- Add app=%a default prefix to documentation.
- Add reports of "App(s) involved" with top queries. Thanks to Antti Koivisto
for the feature request.
- Remove newline between a ) and , in the beautifier.
- Add link to automatically open the explain plan on http://explain.depesz.com/
- Add support to auto_explain, EXPLAIN plan will be added together with top
slowest queries when available in the log file.
- Add a graph on distributed duration per user. Thanks to Korriliam for the
patch.
- Add three new reports: Number of cancelled queries (graph), Queries generating
the most cancellation (N) and Queries most cancelled lists. Thanks to Thomas
Reiss for the feature request.
- Fix case where temporary file statement must be retrieved from the previous
LOG statement and not in the following STATEMENT log entry. Thanks to Mael
Rimbault for the report.
- Add --enable-checksum to show a md5 hash of each reported queries. Thanks
to Thomas Reiss for the feature request.
2015-04-13 version 6.4
This new release fixes a major bug in the SQL beautifier which removed operators
and adds some useful improvements in the anonymization of parameter values.
pgBadger will also try to parse the full csvlog when a broken CSV line is
encountered.
- Make anonymization more useful. Thanks to Hubert depesz Lubaczewski
for the patch.
- Fix previous patch for csvlog generated with a PostgreSQL version
before 9.0.
- Try to continue CSV parsing after a broken CSV line. Thanks to Sergey
Burladyan for the patch.
- Fix bug in SQL beautifier which removed operator. Thanks to Thomas
Reiss for the report.
- Fix loop exit, check terminate quickly and correct comments
indentation. Thanks to Sergey Burladyan for the patch
Please upgrade.
2015-03-27 version 6.3
This new release fixes some bugs and adds some new reports:
* New per user details (total duration and times executed) for each query
reported in Top Queries reports. The details are visible from a new button
called "User(s) involved".
* Add "Average queries per session" and "Average queries duration per session"
in Sessions tab of the Global statistics.
* Add connection time histogram.
* Use bar graph for Histogram of query times and sessions times.
There's also some cool new features and options:
* Add -L | --logfile-list option to read a list of logfiles from an external
file.
* Add support to log_timezones with + and - signs for timestamp with
milliseconds (%m).
* Add --noreport option to instruct pgbadger to not build any HTML reports
in incremental mode. pgBadger will only create binary files.
* Add auto detection of client=%h or remote=%h from the log so that adding
a prefix is not needed when it respects the default of pgbadger.
* Redefine sessions duration histogram bound to be more accurate.
* Add new option -M | --no-multiline to not collect multi-line statement
and avoid storing and reporting garbage when needed.
* Add --log-duration option to force pgbadger to associate log entries
generated by both log_duration=on and log_statement=all.
The pgbadger_tools script has also been improved with new features:
* Add a new tool to pgbadger_tools to output top queries in CSV format for
follow-up analysis.
* Add --explain-time-consuming and --explain-normalized options to generate
explain statement about top time consuming and top normalized slowest
queries.
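As an example of the new command line options above (the log file list path is
a placeholder):
    # Read the log files to parse from an external list and force association
    # of log_duration and log_statement entries
    pgbadger -L /etc/pgbadger/logfiles.txt --log-duration -o report.html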
Here is the full list of changes/fixes:
- Update flotr2.min.js to latest github code.
- Add per user detail information (total duration and times executed)
for each query reported in "Time consuming queries", "Most frequent
queries" "and Normalized slowest queries". The details are visible
from a new button called "User(s) involved" near the "Examples"
button. Thanks to Guillaume Le Bihan for the patch and tsn77130 for
the feature request.
- pgbadger_tool: add tool to output top queries to CSV format, for
follow-up analysis. Thanks to briklen for the patch.
- Add geometric operators to SQL beautifier. Thanks to Rodolphe
Quiedeville for the report.
- Fix non-closing session when a process crashes with the message:
"terminating connection because of crash of another server process".
Thanks to Mael Rimbault for the report.
- Add -L|--logfile-list command line option to read a list of logfiles
from a file. Thanks to Hubert depesz Lubaczewski for the feature
request.
- Automatically remove %q from prefix. Thanks to mbecroft for report.
- Do not store DEALLOCATE log entries anymore.
- Fix queries histogram where ranges did not appear in the right order.
Thanks to Grzegorz Garlewicz for the report.
- Fix min yaxis in histogram graph. Thanks to grzeg1 for the patch.
- Add --log-duration command line option to force pgbadger to associate
log entries generated by both log_duration = on and log_statement=all.
Thanks to grzeg1 for the feature request.
- Small typographical corrections. Thanks to Jefferson Queiroz Venerando
and Bill Mitchell for the patches.
- Reformat usage output and add explanation of the --noreport command
line option.
- Fix documentation about minimal pattern in custom log format. Thanks
to Julien Rouhaud for the suggestion.
- Add support for log_timezones with + and - signs in timestamps with
milliseconds (%m). Thanks to jacksonfoz for the patch.
pgbadger did not recognize log files with timezones like 'GMT+3'.
- Add --noreport command line option to instruct pgbadger to not build
any reports in incremental mode. pgBadger will only create binary
files. Thanks to hubert Depesz Lubaczewski for the feature request.
- Add time consuming information in tables of Queries per database...
Thanks to Thomas for the feature request.
- Add more details about CSV parser errors. It now prints the line
number and the last parameter that generated the failure. This should
make it possible to see the malformed log entry.
- Change substitution markup in attempt to fix a new look-behind
assertions error. Thanks to Paolo Cavallini for the report.
- Use bar graph for Histogram of query times and sessions times.
- Fix wrong count of min/max queries per second. Thanks to Guillaume
Lelarge for the report. Add COPY statement to SELECT or INSERT
statements statistics following the copy direction (stdin or stdout).
- Fix Illegal division by zero at line 3832. Thanks to MarcoTrek for
the report.
- Add "Average queries per session" and "Average queries duration per
session" in Sessions tab of the Global stat. Thanks to Guillaume
Lelarge for the feature request.
- Reformat numbers in pie graph tracker. Thanks to jirihlinka for the
report.
- pgbadger_tools: Add --explain-time-consuming and --explain-normalized
to generate explain statements for the top time consuming and top
normalized slowest queries. Thanks to Josh Kupershmid for the feature
request.
- Remove everything but error information from the json output when -w |
--watch-mode is enabled. Thanks to jason.
- Fix undefined subroutine encode_json when using -x json. Thanks to
jason for the report.
- Add auto detection of client=%h or remote=%h from the log so that
adding a prefix is not needed when it respects the default of pgbadger.
- Redefine sessions duration histogram bound to be more accurate. Thanks
to Guillaume Lelarge for the report.
- Add connection time histogram. Thanks to Guillaume Lelarge for the
feature request.
- Add new option -M | --no-multiline to not collect multi-line statement
to avoid garbage especially on errors that generate a huge report.
- Do not return SUCCESS error code 0 when aborted or something fails.
Thanks to Bruno Almeida for the patch.
2014-10-07 version 6.2
This is a maintenance release to fix a regression in SQL traffic graphs and
fix some other minor issues.
The release also adds a new option -D or --dns-resolv to map client ip addresses
to FQDNs without having log_hostname enabled in the postgresql configuration.
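A minimal example of the new option (the log path is illustrative; note that
DNS resolution can slow the run down considerably):
    pgbadger -D /var/log/postgresql/postgresql.log -o report.html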
- Do not display queries in Slowest individual, Time consuming and
Normalized slowest queries reports when there is no duration in
log file. Display NO DATASET instead.
- Fix min/max queries in SQL traffic that was based on duration instead
of query count.
- Fix wrong unit to Synced files in Checkpoints files report. Thanks
to Levente Birta for the report.
- Enable allow_loose_quotes in Text::CSV_XS call to fix CSV parsing
error when fields have quote inside an unquoted field. Thanks to
Josh Berkus for the report.
- Add -D | --dns-resolv command line option to replace ip addresses
by their DNS names. Be warned that this can slow down pgBadger a lot.
Thanks to Jiri Hlinka for the feature request.
2014-09-25 version 6.1
This release fixes some issues and adds some new features. It adds a new option
-B or --bar-graph to use bars instead of lines in graphs. It will also keep tick
formatting when zooming.
The release also adds a new program, pgbadger_tools, to demonstrate how to
work with pgBadger binary files to build your own new features. The first
tool, 'explain-slowest', allows printing of the top slowest queries as EXPLAIN
statements. There are also additional options to automatically execute the
statements with EXPLAIN ANALYZE and get the execution plan. See the help of the
program for more information or the README file in the tools directory.
Some modifications will change certain behavior:
- The -T | --title text value will now be displayed instead of the
pgBadger label right after the logo. It was previously displayed
on mouse over the pgBadger label.
Here is the full list of changes/fixes:
- Change -T | --title position on the pgBadger report. The title now
overrides the pgBadger label. Thanks to Julien Rouhaud for the patch.
- Add --file-per-query and --format-query options to write each slowest
query in a separate file named qryXXX.sql and perform minimal formatting
of the queries. Thanks to Rodolphe Quiedeville for the patch.
- Remove debug query from explain-slowest tool.
- Fix surge in sessions number report when an exclusion or inclusion
option (dbname, user, appname, etc.) is used. Thanks to suyah for the
report.
- Fix fatal error when remote log file is 0 size. Thanks to Julien
Rouhaud for the report.
- Allow pgbadger_tools --explain-slowest to automatically execute the
EXPLAIN statements and report the plan. See pgbadger_tools --help for
more explanation.
- Add --analyze option to replace EXPLAIN statements by EXPLAIN
(ANALYZE, VERBOSE, BUFFERS).
- Move pgbadger_tools program and README.tools into the tools/
subdirectory with removing the extension. Add more comments and
explanations.
- Fix a case of dying when an interrupt signal is received while using
the -e option. Thanks to Lloyd Albin for the report.
- Add a new program, pgbadger_tools, to demonstrate how to deal with
pgBadger binary files to build your own new features. The first one,
'explain-slowest', allows printing of the top slowest queries as EXPLAIN
statements.
- Keep tick formatting when zooming. Thanks to Julien Rouhaud for the
patch.
- Fix automatic detection of rsyslogd logs. Thanks to David Day for
the report.
- Fix issue in calculating min/max/avg in the "General Activity" report.
It was built on the sum of query durations per minute instead of each
duration. Thanks to Jayadevan M for the report.
- The same issue remains with percentiles, which are built using the sum
of durations per minute and do not represent the real query durations.
- This commit also includes a modification of the convert_time() method
to report milliseconds.
- Add -B or --bar-graph command line option to use bars instead of lines
in graphs. Thanks to Bart Dopheide for the suggestion.
- Fix Checkpoint Wal files usage graph title.
2014-08-08 version 6.0
This new major release adds some new features like automatic cleanup of binary
files in incremental mode or a maximum number of weeks for report retention.
It improves the incremental mode by allowing the use of multiprocessing with
multiple log files.
It also adds report of query latency percentile on the general activity table
(percentiles are 90, 95, 99).
There's also a new output format: JSON. This format is good for sharing data
with other languages, which makes it easy to integrate pgBadger's result into
other monitoring tools.
You may want to expose your reports but not the data: using the --anonymize
option, pgBadger will be able to anonymize all literal values in the queries.
Sometimes selecting a query in the report to copy it could be a pain. There's
now a click-to-select button in front of each query that allows you to just use
Ctrl+C to copy it to the clipboard.
The use of the new -X option also allows pgBadger to write out extra files to
the outdir when creating incremental reports. Those files are the CSS and
Javascript code normally repeated in each HTML file.
Warning: the behavior of pgBadger in incremental mode has changed. It will now
always clean up the output directory of all obsolete binary files. If you were
using those files to build your own reports, you can prevent pgBadger from
removing them by using the --noclean option. Note that if you use the retention
feature, all those files in obsolete directories will be removed too.
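As a sketch of how several of these new features fit together in incremental
mode (paths and retention value are placeholders only):
    # Keep 4 weeks of reports, share CSS/JS through extra files, anonymize
    # literal values and keep the obsolete binary files
    pgbadger -I -O /var/www/pgbadger -R 4 -X --anonymize --noclean \
        /var/log/postgresql/postgresql.log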
Here is the complete list of changes.
- Javascript improvement to use only one call of sql_select and
sql_format. Use jQuery selector instead of getElementById to
avoid js errors when not found. Thanks to Julien Rouhaud for the
patches.
- Add -R | --retention command line option to set the maximum number of
week reports to preserve in the output directory for incremental mode.
Thanks to Kong Man for the feature request.
- Session count is immediately decreased when a FATAL error is received
in the current session to prevent overcounting of the simultaneous
session number. Thanks to Josh Berkus for the report.
- Fix issue in incremental mode when parsing is stopped after rotating
log and rotated log has new lines. The new file was not parsed at all.
Thanks to CZAirwolf for the report.
- Fix revert to single thread when last_line_parsed exists. Thanks to
Bruno Almeida for the report.
- Fix issue in handling SIGTERM/SIGINT that cause pgbadger to continue.
- Add autoclean feature to pgbadger in incremental mode. pgbadger will
now automatically remove obsolete binary files unless you specify
--noclean at the command line.
- Add new command line option --anonymize to obscure all literals in
queries/errors to hide confidential data. Thanks to wmorancfi for the
feature request.
- Fix single "SELECT;" as a query in a report. Thanks to Marc Cousin for
the report.
- Add a copy icon in front of each query in the report to select the
entire query. Thanks to Josh Berkus for the feature request.
- Fix wrong move to the beginning of a file if the file was modified after
having been parsed once. Thanks to Herve Werner for the report.
- Allow pgBadger to write out extra files to outdir when creating
incremental reports. Require the use of the -X or --extra-files option
in incremental mode. Thanks to Matthew Musgrove for the feature request.
- Fix incomplete handling of XZ compressed format.
- Fix move to offset in incremental mode with multiprocess and incomplete
condition when file is smaller than the last offset. Thanks to Herve
Werner for the report.
- Allow/improve incremental mode with multiple log file and multiprocess.
- Fix incorrect location of temporary file storing last parsed line in
multiprocess+incremental mode. Thanks to Herve Werner for the report.
- Fix remote ssh command error sh: 2: Syntax error: "|" unexpected.
Thanks to Herve Werner for the report.
- Fix missing database name in samples of top queries reports. Thanks to
Thomas Reiss for the report.
- Add minimal documentation about JSON output format.
- Add execute attribute to pgbadger in the source repository, some may
find this more helpful when pgbadger is not installed and executed
directly from this repository.
- Fix issue with csv log format and incremental mode. Thanks to Suya for
the report and the help to solve the issue. There is also a fix to
support autovacuum statistics with the csv format.
- Fix bad URL to documentation. Thanks to Rodolphe Quiedeville for the report.
- Two minor changes to make it easier to use the Tsung scenario: remove
the first empty line and replace probability by weight. Now it is
possible to use the scenario as is with Tsung 1.5.
- Fix incremental mode where weeks on index page start on sunday and week
reports start on monday. Thanks to flopma and birkosan for the report.
- Replace label "More CPU costly" by "Highest CPU-cost". Thanks to Marc
Cousin for the suggestion.
- Add query latency percentile to General Activity table (percentiles are
90, 95, 99). Thanks to Himanchali for the patch.
- Fix typo in pgbadger call. Thanks to Guilhem Rambal for the report.
- Add JSON support for output format. JSON format is good for sharing data
with other languages, which makes it easy to integrate pgBadger's result
into other monitoring tools like Cacti or Graphite. Thanks to Shanzhang
Lan for the patch.
- Update documentation about remote mode feature.
- Update documentation to inform that the xz utility should be at least in
version 5.05 to support the --robot command line option. Thanks to Xavier
Millies-Lacroix for the report.
- Fix remote logfile parsing. Thanks to Herve Werner for the report.
2014-05-05 version 5.1-1
- Fix parsing of remote log file, forgot to apply some patches.
Thanks to Herve Werner for the report.
2014-05-04 version 5.1
This new release fixes several issues and adds several new features like:
* Support for named PREPARE and EXECUTE queries. They are replaced by
the real prepare statement and reported into top queries.
* Add new --exclude-line command line option to immediately exclude
log entries matching any regex.
* Included remote and client information into the most frequent events.
* pgBadger is now able to parse remote log files using a passwordless
ssh connection and generate the reports locally.
* Histogram granularity can be adjusted using the -A command line option.
* Add new detail information on top queries to show when the query is a
bind query.
* Support for log files compressed using the xz compression format.
* Change week/day menu in incremental index, it is now represented as
usual with a calendar view per month.
* Fix various compatibility issues with Windows and Perl 5.8
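For illustration, remote parsing and histogram granularity can be combined as
below; the ssh URI form is the one documented in recent pgBadger versions and
host/paths are placeholders:
    # Parse a remote log over a password-less ssh connection with a
    # 10-minute histogram granularity
    pgbadger -A 10 ssh://postgres@dbhost/var/log/postgresql/postgresql.log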
Here is the full list of changes:
- fixed calendar display and correct typo. Thanks to brunomgalmeida
for the patch.
- revert to single thread if file is small. Thanks to brunomgalmeida
for the patch.
- print calendars 4+4+4 instead of 3+4+4+1 when looking at full year.
Thanks to brunomgalmeida for the patch.
- Add --exclude-line option for excluding log entries with a regex based
on the full log line. Thanks to ferfebles for the feature request.
- Fix SQL keywords that was beautified twice.
- Remove duplicate pg_keyword in SQL beautifier.
- Fix increment of session when --disable-session is activated.
- Fix missing unit in Checkpoints Activity report when time value is
empty. Thanks to Herve Werner for the report.
- Fix double information in histogram data when period is the hour.
- Add support to named PREPARE and EXECUTE queries. Calls to EXECUTE
statements are now replaced by the prepared query and show samples
with parameters. Thanks to Brian DeRocher for the feature request.
- Included Remote and Client information into the most frequent events
examples. Thanks to brunomgalmeida for the patch.
- Fix documentation about various awkward phrasings, grammar, and
spelling. Consistently capitalize "pgBadger" as such, except for
command examples which should stay all-lowercase. Thanks to Josh
Kupershmidt for the patch.
- Fix incremental mode on Windows by replacing %F and %u POSIX::strftime
format to %Y-%m-%d and %w. Thanks to dthiery for the report.
- Remove Examples button when there is no examples available.
- Fix label on tips in histogram of errors reports.
- Fix error details in incremental mode in Most Frequent Errors/Events
report. Thanks to Herve Werner for the report.
- Fix Sync time value in Checkpoints buffers report. Thanks to Herve
Werner for the report.
- Fix wrong connections per host count. Thanks to Herve Werner for the
report.
- Allow pgBadger to parse remote log files using a passwordless ssh
connection. Thanks to Orange OLPS department for the feature request.
- Histogram granularity can be adjusted using the -A command line
option. By default they will report the mean of each top queries or
errors occurring per hour. You can now specify the granularity down to
the minute. Thanks to Orange OLPS department for the feature request.
- Add new detail information on top queries to show when the query is
a bind query. Thanks to Orange OLPS department for the feature request.
- Fix queries that exceed the size of the container.
- Add unit (seconds) to checkpoint write/sync time in the checkpoints
activity report. Thanks to Orange OLPS department for the report.
- Fix missing -J option in usage.
- Fix incomplete lines in split logfile to rewind to the beginning of
the line. Thanks to brunomgalmeida for the patch.
- Fix tsung output and add tsung xml header sample to output file.
- Make it possible to do --sample 0 (prior it was falling back to the
default of 3). Thanks to William Moran for the patch.
- Fix xz command to be script readable and always have size in bytes:
xz --robot -l %f | grep totals | awk "{print $5}"
- Add support for log files compressed with the xz compression format.
Thanks to Adrien Nayrat for the patch.
- Do not increment queries duration histogram when prepare|parse|bind
log are found, but only with execute log. Thanks to Josh Berkus for
the report.
- Fix normalization of error message about unique violation when
creating intermediate dirs. Thanks to Tim Sampson for the report.
- Allow use of Perl metacharacters like [..] in application name.
Thanks to Magnus Persson for the report.
- Fix dataset tip to be displayed above image control button. Thanks
to Ronan Dunklau for the fix.
- Renamed the Reset button to "To Chart" to avoid confusion with the
unzoom feature.
- Fix writing of empty incremental last parsed file.
- Fix several other graphs
- Fix additional message at end of query or error when it was logged
from application output. Thanks to Herve Werner for the report.
- Fix checkpoint and vacuum graphs when all dataset does not have all
values. Thanks to Herve Werner for the report.
- Fix week numbered -1 in calendar view.
- Change week/day menu in incremental index, it is now represented as
usual with a calendar view per month. Thanks to Thom Brown for the
feature request.
- Load FileHandle to fix error: Can not locate object method "seek"
via package "IO::Handle" with perl 5.8. Thanks to hkrizek for the
report.
- Fix count of queries in progress bar when there is compressed file
and multiprocess is enabled. Thanks to Johnny Tan for the report.
- Fix debug message "Start parsing at offset"
- Add ordering in queries times histogram. Thanks to Ulf Renman for
the report.
- Fix various typos. Thanks to Thom Brown for the patch.
- Fix Makefile error, "WriteMakefile: Need even number of args at
Makefile.PL" with Perl 5.8. Thanks to Fangr Zhang for the report.
- Fix some typo in Changelog
2014-02-05 version 5.0
This new major release adds some new features like incremental mode and a SQL
query times histogram. There is also an hourly graphic representation of the
count and average duration of top normalized queries. Same for errors or
events: you will be able to see graphically at which hours they occur most
often.
The incremental mode is an old request issued at PgCon Ottawa 2012 that concerns
the ability to construct incremental reports with successive runs of pgBadger.
It is now possible to run pgbadger each day, or even more often, each hour, and
have cumulative reports per day and per week. A top index page allows you to go
directly to the weekly and daily reports.
This mode has been built with simplicity in mind, so running pgbadger by cron
as follows:
0 23 * * * pgbadger -q -I -O /var/www/pgbadger/ /var/log/postgresql.log
is enough to have daily and weekly reports viewable using your browser.
You can take a look at a sample report at http://dalibo.github.io/pgbadger/demov5/index.html
There's also a useful improvement to allow pgBadger to seek directly to the
last position in the same log file after a successive execution. This feature
is only available using the incremental mode or the -l option and parsing a
single log file. Let's say you have a weekly rotated log file and want to run
pgBadger each day. With 2GB of log per day, pgbadger was spending 5 minutes
per block of 2 GB to reach the last position in the log, so at the end of the
week this feature will save you 35 minutes. Now pgBadger will start parsing
new log entries immediately. This feature is compatible with the multiprocess
mode using the -j option (n processes for one log file).
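A minimal sketch of this feature with the -l option (the last-parsed file path
is a placeholder):
    # Store the last parsed position so the next run resumes where the
    # previous one stopped
    pgbadger -l /var/lib/pgbadger/LAST_PARSED -o report.html /var/log/postgresql/postgresql.log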
Histogram of query times is a new report in top queries slide that shows the
query times distribution during the analyzed period. For example:
Range            Count      Percentage
--------------------------------------------
0-1ms         10,367,313      53.52%
1-5ms            799,883       4.13%
5-10ms           451,646       2.33%
10-25ms        2,965,883      15.31%
25-50ms        4,510,258      23.28%
50-100ms         180,975       0.93%
100-500ms         87,613       0.45%
500-1000ms         5,856       0.03%
1000-10000ms       2,697       0.01%
> 10000ms             74       0.00%
There are also some graphic and report improvements, like the mouse tracker
formatting that has been reviewed. It now shows a vertical crosshair and
all dataset values at once when the mouse pointer moves over a series.
Automatic query formatting has also been changed: it is now done on the
double-click event, as a simple click was painful when you wanted to copy
some part of the queries.
The report "Simultaneous Connections" has been relabeled into "Established
Connections", it is less confusing as many people think that this is the number
of simultaneous sessions, which is not the case. It only count the number of
connections established at same time.
Autovacuum reports now associate database name to the autovacuum and autoanalyze
entries. Statistics now refer to "dbname.schema.table", previous versions was only
showing the pair "schema.table".
This release also adds Session peak information and a report about Simultaneous
sessions. Parameters log_connections and log_disconnections must be enabled in
postgresql.conf.
Complete ChangeLog:
- Fix size of SQL queries columns to prevent exceeding screen width.
- Add new histogram reports on top normalized queries and top errors
or event. It shows at what hours and in which quantity the queries
or errors appears.
- Add seeking to the last parser position in the log file in incremental
mode. This prevents parsing the whole file to find the last line parsed
in the previous run. This only works when parsing a single flat file;
the -j option is permitted. Thanks to ioguix for the kick.
- Rewrite reloading of last log time from binary files.
- Fix missing statistics of last parsed queries in incremental mode.
- Fix bug in incremental mode that prevent reindexing a previous day.
Thanks to Martin Prochazka for the great help.
- Fix missing label "Avg duration" on column header in details of Most
frequent queries (N).
- Add vertical crosshair on graph.
- Fix case where queries and events were not updated when using the -b and
-e command line options. Thanks to Nicolas Thauvin for the report.
- Fix week sorting on incremental report main index page. Thanks to
Martin Prochazka for the report.
- Add "Histogram of query times" report to show statistics like
0-100ms : 80%, 100-500ms :14%, 500-1000ms : 3%, > 1000ms : 1%.
Thanks to tmihail for the feature request.
- Format mouse tracker on graphs to show all dataset value at a time.
- Add control of -o vs -O option with incremental mode to prevent
wrong use.
- Change log level of missing LAST_PARSED.tmp file to WARNING and
add a HINT.
- Update copyright date to 2014
- Fix empty reports of connections. Thanks to Reeshna Ramakrishnan
for the report.
- Fix display of connections peak when no connection was reported.
- Fix warning on META_MERGE for ExtUtils::MakeMaker < 6.46. Thanks
to Julien Rouhaud for the patch.
- Add documentation about automatic incremental mode.
- Add incremental mode to pgBadger. This mode will build a report
per day and a cumulative report per week. It also creates an index
interface for easier access to the different reports. Must be run,
for example, as:
pgbadger /var/log/postgresql.log.1 -I -O /var/www/pgbadger/
after a daily PostgreSQL log file rotation.
- Add -O | --outdir path to specify the directory where out file
must be saved.
- Automatic queries formatting is now done on double click event,
simple click was painful when you want to copy some part of the
queries. Thanks to Guillaume Smet for the feature request.
- Remove calls of binmode to force html file output to be utf8 as
there are some bad side effects. Thanks to akorotkov for the report.
- Remove use of the Time::HiRes Perl module as some distributions do
not include this module by default in the core Perl install.
- Fix "Wide character in print" Perl message by setting binmode
to :utf8. Thanks to Casey Allen Shobe for the report.
- Fix application name search regex to handle application name with
space like "pgAdmin III - Query Tool".
- Fix wrong timestamps saved with top queries. Thanks to Herve Werner
for the report.
- Fix missing log type statistics when using binary mode. Thanks to
Herve Werner for the report.
- Fix Queries by application table column header: Database replaced
by Application. Thanks to Herve Werner for the report.
- Add "Max number of times the same event was reported" report in
Global stats Events tab.
- Replace "Number of errors" by "Number of ERROR entries" and add
"Number of FATAL entries".
- Replace "Number of errors" by "Number of events" and "Total errors
found" by "Total events found" in Events reports. Thanks to Herve
Werner for the report.
- Fix title error in Sessions per database.
- Fix clicking on the info link to not go back to the top of the page.
Thanks to Guillaume Smet for the report and solution.
- Fix incremental report from binary output where binary data was not
loaded if no queries were present in log file. Thanks to Herve Werner
for the report.
- Fix parsing issue when log_error_verbosity = verbose. Thanks to vorko
for the report.
- Add Session peak information and a report about Simultaneous sessions.
log_connections+log_disconnections must be enabled in postgresql.conf.
- Fix wrong requests number in Queries by user and by host. Thanks to
Jehan-Guillaume de Rorthais for the report.
- Fix issue with rsyslog format failing to parse logs. Thanks to Tim
Sampson for the report.
- Associate autovacuum and autoanalyze log entry to the corresponding
database name. Thanks to Herve Werner for the feature request.
- Change "Simultaneous Connections" label into "Established Connections",
it is less confusing as many people think that this is the number of
simultaneous sessions, which is not the case. It only counts the number
of connections established at the same time. Thanks to Ronan Dunklau for
the report.
2013-11-08 version 4.1
This release fixes two major bugs and some other minor issues. There's also a
new command line option --exclude-appname that allows excluding from the report
queries generated by a specific program, like pg_dump. Documentation has
been updated with a new chapter about building incremental reports.
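For example, to keep pg_dump activity out of the report (application name and
log path are illustrative):
    pgbadger --exclude-appname pg_dump /var/log/postgresql/postgresql.log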
- Add log_autovacuum_min_duration into documentation in chapter about
postgresql configuration directives. Thanks to Herve Werner for the
report.
- Add chapter about "Incremental reports" into documentation.
- Fix reports with per minutes average where last time fraction was
not reported. Thanks to Ludovic Levesque and Vincent Laborie for the
report.
- Fix unterminated comment in information popup. Thanks to Ronan
Dunklau for the patch.
- Add --exclude-appname command line option to eliminate unwanted
traffic generated by a specific application. Thanks to Steve Crawford
for the feature request.
- Allow the use of external links in the URL to go to a specific report.
Thanks to Hubert depesz Lubaczewski for the feature request.
- Fix empty reports when parsing compressed files with the -j option
which is not allowed with compressed file. Thanks to Vincent Laborie
for the report.
- Prevent the progress bar length from increasing past 100% when the real
size is greater than the estimated size (issue found with a huge
compressed file).
- Correct some spelling and grammar in ChangeLog and pgbadger. Thanks
to Thom Brown for the patch.
- Fix major bug on SQL traffic reports with wrong min value and bad
average value on select reports, add min/max for select queries.
Thanks to Vincent Laborie for the report.
2013-10-31 - Version 4.0
This major release is the "Say goodbye to the fouine" release. With a full
rewrite of the reports design, pgBadger has now turned the HTML reports into
a more intuitive user experience and professional look.
The report is now driven by a dynamic menu with the help of the embedded
Bootstrap library. Every main menu corresponds to a hidden slide that is
revealed when the menu or one of its submenus is activated. There's
also the embedded font Font Awesome webfont to beautify the report.
Every statistics report now includes a key value section that immediately
shows you some of the relevant information. Pie charts have also been
separated from their data tables using two tabs, one for the chart and the
other one for the data.
Tables reporting hourly statistics have been moved to a multiple tabs report
following the data. This is used with General (queries, connections, sessions),
Checkpoints (buffer, files, warnings), Temporary files and Vacuums activities.
There's some new useful information shown in the key value sections. Peak
information shows the number and datetime of the highest activity. Here is the
list of those reports:
- Queries peak
- Read queries peak
- Write queries peak
- Connections peak
- Checkpoints peak
- WAL files usage Peak
- Checkpoints warnings peak
- Temporary file size peak
- Temporary file number peak
Reports about Checkpoints and Restartpoints have been merged into a single report.
These are almost one and the same event, except that restartpoints occur on a slave
cluster, so there was no need to distinguish between the two.
Recent PostgreSQL versions add additional information about checkpoints, the
number of synced files, the longest sync and the average of sync time per file.
pgBadger collects and shows this information in the Checkpoint Activity report.
There's also some new reports:
- Prepared queries ratio (execute vs prepare)
- Prepared over normal queries
- Queries (select, insert, update, delete) per user/host/application
- Pie charts for tables with the most tuples and pages removed during vacuum.
The vacuum report will now highlight the costly tables during a vacuum or
analyze of a database.
The errors are now highlighted by a different color following the level.
A LOG level will be green, HINT will be yellow, WARNING orange, ERROR red
and FATAL dark red.
Some changes in the binary format are not backward compatible and the option
--client has been removed as it has been superseded by --dbclient for a long time now.
If you are running a pg_dump or some batch process with very slow queries, your
report analysis will be hindered by those queries having unwanted prominence in the
report. Before this release it was a pain to exclude those queries from the
report. Now you can use the --exclude-time command line option to exclude all
traces matching the given time regexp from the report. For example, let's say
you have a pg_dump at 13:00 each day during half an hour, you can use pgbadger
as follows:
pgbadger --exclude-time "2013-09-.* 13:.*" postgresql.log
If you are also running a pg_dump at night, let's say 22:00, you can write it
as follows:
pgbadger --exclude-time '2013-09-\d+ 13:[0-3]' --exclude-time '2013-09-\d+ 22:[0-3]' postgresql.log
or more shortly:
pgbadger --exclude-time '2013-09-\d+ (13|22):[0-3]' postgresql.log
Exclude time always requires the iso notation yyyy-mm-dd hh:mm:ss, even if log
format is syslog. This is the same for all time-related options. Use this option
with care as it has a high cost on the parser performance.
2013-09-17 - version 3.6
Yet another version in the 3.x branch to fix two major bugs in vacuum and checkpoint
graphs. Some other minor bugs have also been fixed.
- Fix grammar in --quiet usage. Thanks to stephen-a-ingram for the report.
- Fix reporting period to start after the last --last-parsed value instead
of the first log line. Thanks to Keith Fiske for the report.
- Add --csv-separator command line usage to documentation.
- Fix CSV log parser and add --csv-separator command line option to allow
changing the default csv field separator, a comma, to any other character.
- Avoid "negative look behind not implemented" errors on perl 5.16/5.18.
Thanks to Marco Baringer for the patch.
- Support timestamps for begin/end with fractional seconds (so it'll handle
postgresql's normal string representation of timestamps).
- When using negative look-behind, set sub-regexp to -i (not case insensitive)
to avoid issues with some upper case letter sequences, like SS or ST.
- Change shebang from /usr/bin/perl to /usr/bin/env perl so that user-local
(perlbrew) perls will get used.
- Fix empty graph of autovacuum and autoanalyze.
- Fix checkpoint graphs that were not displayed anymore.
2013-07-11 - Version 3.5
Last release of the 3.x branch. This is a bug fix release that also adds pretty
printing of Y axis numbers on graphs, a new graph that holds the query duration
series previously shown as a second Y axis, and a new graph with the number of
temporary files that was also drawn as a second Y axis.
- Split temporary files report into two graphs (file size and number of
files) so that a second Y axis is no longer used with flotr2 - the mouse
tracker was not working as expected.
- Duration series representing the second Y axis in query graphs have
been removed and are now drawn in a new independent "Average queries
duration" graph.
- Add pretty printing of numbers in the Y axis and mouse tracker output
with PB, TB, GB, KB, B units, and seconds, microseconds. Numbers without
a unit are shown with P, T, M, K suffixes to make very long numbers
easier to read.
- Remove Query type reports when log only contains duration.
- Fix display of checkpoint hourly report with no entry.
- Fix count in Query type report.
- Fix minimal statistics output when nothing was loaded from the log file.
Thanks to Herve Werner for the report.
- Fix several bugs in the log line parser. Thanks to Den Untevskiy for
the report.
- Fix bug in last-parsed storage when log files were not provided in the
right order. Thanks to Herve Werner for the report.
- Fix orphan lines wrongly associated with previous queries instead of
temporary file and lock logged statements. Thanks to Den Untevskiy for
the report.
- Fix number of different samples shown in events report.
- Escape HTML tags on error messages examples. Thanks to Mael Rimbault
for the report.
- Remove some temporary debug information used with some LOG messages
reported as events.
- Fix several issues with restartpoint and temporary files reports.
Thanks to Guillaume Lelarge for the report.
- Fix issue when an absolute path was given to the incremental file.
Thanks to Herve Werner for the report.
- Remove creation of incremental temp file $tmp_last_parsed when not
running in multiprocess mode. Thanks to Herve Werner for the report.
2013-06-18 - Version 3.4
This release adds a lot of graphic improvements and better rendering for logs
spanning only a few hours. There are also some bug fixes, especially in the report
of queries that generate the most temporary files.
- Update flotr2.min.js to latest github code.
- Add mouse tracking over y2axis.
- Add label/legend information to ticks displayed on mouseover graphs.
- Fix documentation about log_statement and log_min_duration_statement.
Thanks to Herve Werner for the report.
- Fix missing top queries for locks and temporary files in multiprocess
mode.
- Cleanup code to remove storage of unused information about connection.
- Divide the huge dump_as_html() method with one method per each report.
- Checkpoints, restart points and temporary files are now drawn using a
period of 5 minutes per default instead of one hour. Thanks to Josh
Berkus for the feature request.
- Change fixed increment of one hour to five minutes on queries graphs
"SELECT queries" and "Write queries". Remove graph "All queries" as,
with a five minutes increment, it duplicates the "Queries per second".
Thanks to Josh Berkus for the feature request.
- Fix typos. Thanks to Arsen Stasic for the patch.
- Add default HTML charset to utf-8 and a command line option --charset
to be able to change the default. Thanks to thomas hankeuhh for the
feature request.
- Fix missing temporary files query reports in some conditions. Thanks
to Guillaume Lelarge and Thomas Reiss for the report.
- Fix some parsing issue with log generated by pg 7.4.
- Update documentation about missing new reports introduced in previous
version 3.3.
Note that this should be the last release of the 3.x branch unless there are major
bug fixes; the next one will be a major release with a completely new design.
2013-05-01 - Version 3.3
This release adds four more useful reports about queries that generate locks and
temporary files, another new report about restartpoints on slaves, and several
bug fixes or cosmetic changes. Support for parallel processing under Windows
has been removed.
- Remove parallel processing on the Windows platform, as the use of waitpid
freezes pgbadger. Thanks to Saurabh Agrawal for the report. I'm not
comfortable with that OS, which is why support has been removed; if
someone knows how to fix that, please submit a patch.
- Fix Error in tempfile() under Windows. Thanks to Saurabh Agrawal for
the report.
- Fix wrong queries storage with lock and temporary file reports. Thanks
to Thomas Reiss for the report.
- Add samples queries to "Most frequent waiting queries" and "Queries
generating the most temporary files" report.
- Add two more reports about locks: 'Most frequent waiting queries (N)",
and "Queries that waited the most". Thanks to Thomas Reiss for the
patch.
- Add two reports about temporary files: "Queries generating the most
temporary files (N)" and "Queries generating the largest temporary
files". Thanks to Thomas Reiss for the patch.
- Cosmetic change to the Min/Max/Avg duration columns.
- Fix report of samples error with csvlog format. Thanks to tpoindessous
for the report.
- Add --disable-autovacuum to the documentation. Thanks to tpoindessous
for the report.
- Fix unmatched ) in regex when using %s in prefix.
- Fix bad average size of temporary file in Overall statistics report.
Thanks to Jehan Guillaume de Rorthais for the report.
- Add restartpoint reporting. Thanks to Guillaume Lelarge for the patch.
- Made some minor change in CSS.
- Replace %% in the log line prefix internally with a single % so that it
can be exactly the same as in log_line_prefix. Thanks to Cal
Heldenbrand for the report.
- Fix perl documentation header, thanks to Cyril Bouthors for the patch.
2013-04-07 - Version 3.2
This is mostly a bug fix release. It also adds escaping of HTML code inside
queries and adds Min/Max reports with average duration to all query
reports.
- In multiprocess mode, fix case where pgbadger does not update
the last-parsed file and does not take the previous run into account.
Thanks to Kong Man for the report.
- Fix case where pgbadger does not update the last-parsed file.
Thanks to Kong Man for the report.
- Add CDATA to make validator happy. Thanks to Euler Taveira de
Oliveira for the patch.
- Some code review by Euler Taveira de Oliveira, thanks for the
patch.
- Fix case where stat were multiplied by N when -J was set to N.
Thanks to thegnorf for the report.
- Add a line in the documentation about log_statement disabling
log_min_duration_statement when it is set to all.
- Add quick note on how to contribute, thanks to Damien Clochard
for the patch.
- Fix issue with logs read from stdin. Thanks to hubert depesz
lubaczewski for the report.
- Force pgbadger to not try to beautify queries bigger than 10kb,
as this takes too much time. This value may be reduced in the
future if hangs with long queries still happen. Thanks to John
Rouillard for the report.
- Fix another issue in replacing bind parameters when the bind value
is alone on a single line. Thanks to Kjeld Peters for the report.
- Fix parsing of compressed files together with uncompressed files
using the -j option. Uncompressed files are now processed using the
split method and compressed ones are each parsed by a dedicated process.
- Replace zcat by gunzip -c to fix an issue on MacOsx. Thanks to
Kjeld Peters for the report.
- Escape HTML code inside queries. Thanks to denstark for the report.
- Add Min/Max in addition to average duration values in query reports.
Thanks to John Rouillard for the feature request.
- Fix top slowest array size with binary format.
- Fix another case of bind parameters with the value on the next line,
and the top N slowest queries that were repeated up to N entries even
if the real number of queries was lower. Thanks to Kjeld Peters for the
reports.
- Fix non-replacement of bind parameters where there are line breaks in
the parameters, aka multiline bind parameters. Thanks to Kjeld Peters
for the report.
- Fix error with seekable export tag with Perl v5.8. Thanks to Jeff Bohmer
for the report.
- Fix parsing of non-standard syslog lines beginning with a timestamp like
"2013-02-28T10:35:11-05:00". Thanks to Ryan P. Kelly for the report.
- Fix issue #65 where using -c | --dbclient with csvlog was broken. Thanks
to Jaime Casanova for the report.
- Fix empty report in watchlog mode (-w option).
2013-02-21 - Version 3.1
This is a quick release to fix missing reports of the most frequent errors and
slowest normalized queries in the previous version, published yesterday.
- Fix empty report in watchlog mode (-w option).
- Force an immediate die on command line option errors.
- Fix missing report of most frequent events/errors report. Thanks to
Vincent Laborie for the report.
- Fix missing report of slowest normalized queries. Thanks to Vincent
Laborie for the report.
- Fix display of last print of progress bar when quiet mode is enabled.
2013-02-20 - Version 3.0
This new major release adds parallel log processing using as many cores as
wanted to parse log files; the performance gain is directly related to the
number of cores specified. There are also new reports about autovacuum/autoanalyze
information, and many bugs have been fixed.
- Update documentation about log_duration, log_min_duration_statement
and log_statement.
- Rewrite dirty code around log timestamp comparison to find timestamp
of the specified begin or ending date.
- Remove distinction between logs with duration enabled from variables
log_min_duration_statement and log_duration. Commands line options
--enable-log_duration and --enable-log_min_duration have been removed.
- Update documentation about parallel processing.
- Remove usage of Storable::file_magic to autodetect binary format files,
as it is not included in core Perl 5.8. Thanks to Marc Cousin for the
report.
- Force multiprocess per file when files are compressed. Thanks to
Julien Rouhaud for the report.
- Add progress bar logger for multiprocess by forking a dedicated
process and using pipe. Also fix some bugs in using binary format
that duplicate query/error samples per process.
- chmod 755 pgbadger
- Fix checkpoint reports when there is no checkpoint warnings.
- Fix missing report of hourly connections/checkpoints/autovacuum when no
query is found in the log file. Thanks to Guillaume Lelarge for the
report.
- Add better handling of signals in multiprocess mode.
- Add -J|--job_per_file command line option to force pgbadger to use
one process per file instead of using all of them to parse one file.
Useful to get better performance with lots of small log files.
- Fix parsing of orphan lines with stderr logs and log_line_prefix
without session information into the prefix (%l).
- Update documentation about -j | --jobs option.
- Allow pgbadger to use several cores, aka multiprocessing. Add options
-j | --jobs option to specify the number of core to use.
- Add autovacuum and autoanalyze infos to binary format.
- Fix case in SQL code highlighting where QQCODE temp keyword was not
replaced. Thanks to Julien Rouhaud for the report.
- Fix CSS to draw autovacuum graph and change legend opacity.
- Add pie graph to show repartition of number of autovacuum per table
and number of tuples removed by autovacuum per table.
- Add debug information about selected type of log duration format.
- Add report of tuples/pages removed in report of Vacuums by table.
- Fix major bug on syslog parser where years part of the date was
wrongly extracted from current date with logs generated in 2012.
- Fix issue with Perl 5.16 that does not allow "ss" inside look-behind
assertions. Thanks to Cedric for the report.
- New vacuum and analyze hourly reports and graphs. Thanks to Guillaume
Lelarge for the patch.
UPGRADE: if you are running pgbadger from cron, take care if you were using one of
the following options: --enable-log_min_duration or --enable-log_duration; they
have been removed and pgbadger will refuse to start.
2013-01-17 - Version 2.3
This release fixes several major issues, especially with csvlog, and a memory leak
in log parsing when using a start date. There are also several improvements like new
reports of the number of queries by database and application. Mousing over reported
queries will show the database, user, remote client and application name where they
were executed.
A new binary input/output format has been introduced to allow saving or reading
precomputed statistics. This will allow incremental reports based on periodical
runs of pgbadger. This is a work in progress, fully available with the next
major release.
Several SQL code beautifier improvements from pgFormatter have also been merged.
- Clarify misleading statement about log_duration: log_duration may be
turned on depending on desired information. Only log_statement must
not be on. Thanks to Matt Romaine for the patch.
- Fix --dbname and --dbuser not working with csvlog format. Thanks to
Luke Cyca for the report.
- Fix issue in SQL formatting that prevent left back indentation when
major keywords were found. Thanks to Kevin Brannen for the report.
- Display 3 decimals in time report so that ms can be seen. Thanks to
Adam Schroder for the request.
- Force the parser to not insert a new line after the SET keyword when
the query begins with it. This is to preserve single lines with
queries like SET client_encoding TO "utf8";
- Add better SQL formatting of update queries by adding a new line
after the SET keyword. Thanks to pilat66 for the report.
- Update copyright and documentation.
- Queries without an application name are now stored under the "others"
application name.
- Add report of number of queries by application if %a is specified in
the log_line_prefix.
- Add link menu to the request per database and limit the display of
this information when there is more than one database.
- Add report of requests per database.
- Add report of user,remote client and application name to all request
info.
- Fix memory leak with option -b (--begin) and in incremental log
parsing mode.
- Remove duration part from log format auto-detection. Thanks to
Guillaume Lelarge for the report.
- Fix a performance issue in prettifying SQL queries that made pgBadger
several times slower than usual when generating the HTML output. Thanks
to Vincent Laborie for the report.
- Add missing SQL::Beautify paternity.
- Add 'binary' format as an input/output format. The binary output format
allows saving log statistics in a non-human-readable file instead of
an HTML or text file. These binary files might then be used as regular
input files, combined or not, to produce an HTML or text report. Thanks
to Jehan Guillaume de Rorthais for the patch.
- Remove port from the session regex pattern to match all lines.
- Fix the progress bar. It was trying to use gunzip to get real file
size for all formats (by default). Unbreak the bz2 format (that does
not report real size) and add support for zip format. Thanks to Euler
Taveira de Oliveira for the patch.
- Fix some typos and grammatical issues. Thanks to Euler Taveira de
Oliveira for the patch.
- Improve SQL code highlighting and keywords detection merging change
from pgFormatter project.
- Add support to hostname or ip address in the client detection. Thanks
to stuntmunkee for the report.
- pgbadger will now only report execute statements of the extended
protocol (parse/bind/execute). Thanks to pierrestroh for the report.
- Fix numerous typos as well as formatting and grammatical issues.
Thanks to Thom Brown for the patch.
- Add backward compatibility to obsolete --client command line option.
If you were using the short option -c nothing is changed.
- Fix issue with --dbclient and %h in log_line_prefix. Thanks to Julien
Rouhaud for the patch.
- Fix multiline progress bar output.
- Allow usage of a dash into database, user and application names when
prefix is used. Thanks to Vipul for the report.
- Mousing over queries will now show in which database they were executed
in the overviews (Slowest queries, Most frequent queries, etc.).
Thanks to Dirk-Jan Bulsink for the feature request.
- Fix missing keys on %cur_info hash. Thanks to Marc Cousin for the
report.
- Move opening file handle to log file into a dedicated function.
Thanks to Marc Cousin for the patch.
- Replace Ctrl+M by printable \r. Thanks to Marc Cousin for the report.
2012-11-13 - Version 2.2
This release adds some major features like tsung output, speed improvements with
csvlog, a report of shutdown events, and new command line options to generate a
report excluding some user(s), to build a report based on select queries only, to
specify a regex of the queries that must be included in the report, and to remove
comments from queries. Lots of bug fixes, please upgrade.
- Update PostgreSQL keywords list for 9.2
- Fix number of queries in progress bar with tsung output.
- Remove obsolete syslog-ng and temporary syslog-ll log format added to
fix some syslog autodetection issues. There is now just one syslog
format: syslog, differences between syslog formats are detected and
the log parser is adaptive.
- Add comment about the check_incremental_position() method
- Fix reports with empty graphs when log files were not in chronological
order.
- Add report of current total of queries and events parsed in progress
bar. Thanks to Jehan-Guillaume de Rorthais for the patch.
- Force pgBadger to use and require the XS version of Text::CSV instead
of the pure Perl implementation, as it is a good bit faster. Thanks to
David Fetter for the patch. Note that using csvlog is still a bit
slower than the syslog or stderr log formats.
- Fix several issues with tsung output.
- Add report of shutdown events.
- Add debug information about the command line used to pipe compressed
log files when -v is provided.
- Add -U | --exclude-user command line option to generate a report
excluding the given user. Thanks to Birta Levente for the feature request.
- Allow some options to be specified multiple times or written as a
comma-separated list of values; these options are: --dbname,
--dbuser, --dbclient, --dbappname, --exclude_user.
- Add -S | --select-only option to build report only on select queries.
- Add first support to tsung output, see usage. Thanks to Guillaume
Lelarge for the feature request.
- Add --include-query and --include-file to specify regex of the queries
that must only be included in the report. Thanks to Marc Cousin for
the feature request.
- Fix auto detection of log_duration and log_min_duration_statement
format.
- Fix parser issue with Windows logs without timezone information.
Thanks to Nicolas Thauvin for the report.
- Fix bug in %r = remote host and port log line prefix detection.
Thanks to Hubert Depesz Lubaczewski for the report.
- Add -C | --nocomment option to remove comment like /* ... */ from
queries. Thanks to Hubert Depesz Lubaczewski for the feature request.
- Fix escaping of log_line_prefix. Thanks to Hubert Depesz Lubaczewski
for the patch.
- Fix wrong detection of update queries when a query has object names
containing update and set. Thanks to Vincent Laborie for the report.
2012-10-10 - Version 2.1
This release adds a major feature by allowing any custom log_line_prefix to be
used by pgBadger. With stderr output you at least need to log the timestamp (%t),
the pid (%p) and the session/line number (%l). It adds support for log_duration
instead of log_min_duration_statement, to allow reports simply based on duration
and count, without query details. Lots of bug fixes, please upgrade asap.
- Add new --enable-log_min_duration option to force pgbadger to use lines
generated by the log_min_duration_statement even if the log_duration
format is autodetected. Useful if you use both but do not log all queries.
Thanks to Vincent Laborie for the feature request.
- Add syslog-ng format to better handle syslog traces with notation like:
[ID * local2.info]. It is autodetected but can be forced in the -f option
with value set to: syslog-ng.
- Add --enable-log_duration command line option to force pgbadger to only
use the log_duration trace even if log_min_duration_statement traces are
autodetected.
- Fix display of empty hourly graph when no data were found.
- Remove query type report when log_duration is enabled.
- Fix a major bug in query with bind parameter. Thanks to Marc Cousin for
the report.
- Fix detection of compressed log files and allow automatic detection
and uncompress of .gz, .bz2 and .zip files.
- Add gunzip -l command to find the real size of a gzip compressed file.
- Fix log_duration only reports to not take care about query detail but
just count and duration.
- Fix issue with compressed csvlog. Thanks to Philip Freeman for the
report.
- Allow usage of log_duration instead of log_min_duration_statement to
just collect statistics about the number of queries and their time.
Thanks to Vincent Laborie for the feature request.
- Fix issue on syslog format and autodetect with additional info like:
[ID * local2.info]. Thanks to kapsalar for the report.
- Removed unrecognized log line generated by deadlock_timeout.
- Add missing information about unsupported csv log input from stdin.
It must be read from a file. Thanks to Philip Freeman for the report.
- Fix issue #28: Illegal division by zero with log file without query
and txt output. Thanks to rlowe for the report.
- Update documentation about the -N | --appname option.
- Rename the --name option to --appname. Thanks to Guillaume Lelarge for
the patch.
- Fix min/max values in the x axis that always represented 2 days by
default. Thanks to Casey Allen Shobe for the report.
- Fix major bug when running pgbadger with the -e option. Thanks to
Casey Allen Shobe for the report and the great help
- Change project url to http://dalibo.github.com/pgbadger/. Thanks to
Damien Clochard for this new hosting.
- Fix a lot of issues in the CSV parser and force the locale to be C.
Thanks to Casey Allen Shobe for the reports.
- Improve speed with custom log_line_prefix.
- Merge pull request #26 from elementalvoid/helpdoc-fix
- Fixed help text for --exclude-file. Old help text indicated that the
option name was --exclude_file which was incorrect.
- Remove the obsolete --regex-user and --regex-db options that were used
to specify a search pattern in the log_line_prefix to find the user
and db name. This is replaced by the --prefix option.
- Replace Time column report header by Hour.
- Fix another issue in log_line_prefix parser with stderr format
- Add a more complex example using log_line_prefix
- Fix log_line_prefix issue when using timestamps with milliseconds.
- Add support to use any custom log_line_prefix with new option -p or
--prefix. See README for an example.
- Fix false autodetection of CSV format when log_statement is enabled or
possibly in other cases. This resulted in the error: "FATAL: cannot
use CSV". Thanks to Thomas Reiss for the report.
- Fix display of empty graph of connections per second.
- Allow the character : in the log line prefix; it will no longer break
log parsing. Thanks to John Rouillard for the report.
- Add report of configuration parameter changes to the errors report,
and rename the errors report to the events report to handle important
messages that are not errors.
- Allow pgbadger to recognize " autovacuum launcher" messages.
2012-08-21 - version 2.0
This major version adds some changes not backward compatible with previous
versions. Options -p and -g are no longer used, as the progress bar and graph
generation are now enabled by default.
The obsolete -l option used to specify the log file to parse has been reused to
specify an incremental file. Besides these changes and some bug fixes, there are
also new features:
* Using an incremental file with the -l option allows parsing a single log
file multiple times and "seeking" to the last line parsed during the previous
run. Useful if your log rotation is not in sync with your pgbadger run.
For example you can run something like this:
pgbadger `find /var/log/postgresql/ -name "postgresql*" -mtime -7 -type f` \
-o report_`date +%F`.html -l /var/run/pgbadger/last_run.log
* All queries displayed in the HTML report are now clickable to show or
hide a nicely formatted SQL query. This is called the SQL format beautifier.
* The CSV log parser has been entirely rewritten to handle CSV with multiline
entries.
Everyone should upgrade.
- Change license from BSD like to PostgreSQL license. Request from
Robert Treat.
- Fix wrong pointer on Connections per host menu. Reported by Jean-Paul
Argudo.
- Small fix for sql formatting adding scrollbars. Patch by Julien
Rouhaud.
- Add SQL format beautifier on SQL queries. When you will click on a
query it will be beautified. Patch by Gilles Darold
- The progress bar is now enabled by default, the -p option has been
removed. Use -q | --quiet to disable it. Patch by Gilles Darold.
- Graphs are now generated by default for HTML output, option -g has
been removed and option -G added to allow disabling graph generation.
Request from Julien Rouhaud, patch by Gilles Darold.
- Remove options -g and -p from the documentation. Patch by Gilles Darold.
- Fix case sensitivity in command line options. Patch by Julien Rouhaud.
- Add -T|--title option to change report title. Patch by Yury Bushmelev.
- Add new option --exclude-file to exclude specific commands with regex
stated in a file. This is a rewrite by Gilles Darold of the neoeahit
(Vipul) patch.
- The CSV log parser has been entirely rewritten to handle CSV with
multiline entries; it also adds an approximate duration for csvlog.
Reported by Ludhimila Kendrick, patch by Gilles Darold.
- Alphabetical reordering of options list in method usage() and
documentation. Patch by Gilles Darold.
- Remove obsolete -l | --logfile command line option, the -l option
will be reused to specify an incremental file. Patch by Gilles Darold.
- Add -l | --last-parsed options to allow incremental run of pgbadger.
Patch by Gilles Darold.
- Replace call to timelocal_nocheck by timegm_nocheck, to convert date
time into second from the epoch. This should fix timezone issue.
Patch by Gilles Darold.
- Change regex in the log parser to allow a missing ending space in
log_line_prefix. This seems to be a common mistake. Patch by Gilles Darold.
- Print a warning when an empty log file is found. Patch by Gilles Darold.
- Add perltidy rc file to format pgbadger Perl code. Patch from depesz.
2012-07-15 - version 1.2
This version adds some reports and fixes a major issue in the log parser. Everyone
should upgrade.
- Rewrite this changelog to be human readable.
- Add -v | --verbose to enable debug mode. It is now disabled by default.
- Add hourly report of checkpoint warnings when checkpoints occur
too frequently; it displays the hourly count and the average
occurrence time.
- Add new report that sums the messages by log types. The report shows
the number of messages of each log type, and a percentage. It also
displays a pie graph. Patch by Guillaume Lelarge.
- Add missing pie graph on locks by type report.
- Format pie mouse track to display values only.
- Fix graph download button id on new connection graph.
- Add trackFormatter to flotr2 line graphs to show current x/y values.
- Fix issue on per minute minimum value.
- Add a note about Windows Os and zcat as well as a more general note
about using compressed log file in other format than gzip.
- Complete rewrite of the log parser to handle unordered log lines.
Data are now stored by pid first and added to the global statistics
at the end. The error report now includes full details, statements,
contexts and hints when available. Deadlocks are also fully reported
with the concerned queries.
- Fix mishandling of multiline queries on syslog.
- Add -a|--average option to configure the per-minute average interval
for queries and connections. If you want the average to be calculated
every minute instead of every 5 minutes by default, use --average 1; for
the default use --average 5. If you want an average per hour, set it to 60.
- Add hourly statistics of connections and sessions as well as a chart
of the number of connections per second (5 minute average).
- Allow the OTHERS query type lower than 2% to be included in the sum of
types < 2%.
- Add autodetection of the syslog ident name if it is different from the
default "postgres" and there is just one ident name in the log.
- Remove syslog replacement of tabulation by #011 still visible when
there was multiple tabulation.
- Fix autodetection of log format syslog with single-digit day number
in date.
- Add ChangeLog to MANIFEST and change URI in html footer.
- Check pgBadger compatibility with Windows OSes. Runs perfectly.
2012-07-04 - version 1.1
This release fixes a lot of issues and adds several major features.
New feature:
- Add possibility to get log from stdin
- Change syslog parsing regex to allow log timestamp in log_line_prefix
very often forgotten when log destination is changed from stderr to
syslog.
- Add documentation for the -z | --zcat command line option.
- Allow `zcat` location to be specified via `--zcat` - David E. Wheeler
- Add --disable-session,--disable-connection and disable-checkpoint
command line options to remove their respective reports from the
output
- Add --disable-query command line option to remove queries statistics
from the output
- Add --disable-hourly command line option to remove hourly statistics
from the output
- Add --disable-error command line option to remove error report from
the output
- Add --exclude-query option to exclude types of queries by specifying
a regex
- Set the thousands separator and decimal separator to be locale dependent
- Add -w option to only report errors
- Add Makefile.PL and full POD documentation to the project
- Allow multiple log files from command line
- Add simple csvlog support - Alex Hunsaker
- Hourly reports for temporary files and checkpoints have moved to a
separate table.
- Add hourly connections and sessions statistics.
- Add a chart of the number of connections per second.
Bug fix:
- Add information about log format requirement (lc_message = 'C').
Reported by Alain Benard.
- Fix for begin/end dates with single digit day using syslog. Patch by
Joseph Marlin.
- Fix handle of syslog dates with single-digit day number. Patch by
Denis Orlikhin.
- Fix many English syntax issues in error messages and documentation.
Patch by Joseph Marlin.
- Fix unterminated TH HTML tag in the checkpoint hourly table. Reported
by Joseph Marlin.
- "Log file" section will now only report first and last log file parsed
- Fix empty output in hourly temporary file stats.
- Fix wrapping of queries that go out of the table and make the window
scroll horizontally. Asked by Isaac Reuben.
- Fix code where != was replaced by $$CLASSSY0A$$!=$$CLASSSY0B$$ in the
output. Reported by Isaac Reuben
- Fix and review text report output.
- Fix an issue in SQL code highlight replacement.
- Complete review of the HTML output.
- Add .gitignore for swap files. Patch by Vincent Picavet
- Fix wrong variable for user and database filter. Patch by Vincent
Picavet.
- Change default regexp for user and db to be able to detect both. Patch
by Vincent Picavet.
- Fix false cur_date when using syslog and allow -b and -e options to
work. Patch by Vincent Picavet.
- Fix some cases where logs were not detected as PostgreSQL log lines.
- Added explanation for --begin and --end datetime setting. Patch by
ragged.
- Added -v / --version. Patch by ragged.
- Fix usage information and presentation in README file.
2012-05-04 - version 1.0
First public release of pgBadger.
New feature:
- Add graph of checkpoint WAL files usage (added, removed, recycled).
- Add --image-format to allow the change of the default png image
format to jpeg.
- Allow download of all pie graphics as images.
- Add --pie-limit to sum all data lower than this percentage limit to
avoid label overlap.
- Allow download of graphics as PNG images.
- Replace GD::Graph by the Flotr2 javascript library to draw graphics.
Patch by Guillaume Lelarge
- Add pie graphs for session, database, user and host. Add a --quiet
option to remove debug output and --progress to show a progress bar
during log parsing
- Add pie graph for Queries by type.
- Add graph for checkpoint write buffer per hours
- Allow log parsing without any log_line_prefix and extend it to be
defined by the user. A custom log_line_prefix can be parsed using
user-defined regexes with the command line options --regex-db and
--regex-user. For example the default regex of pgbadger to parse the
user and db name from log_line_prefix can be written like this:
pgbadger -l mylogfile.log --regex-user="user=([^,]*)," \
--regex-db="db=([^,]*)"
- Separate the log_line_prefix from the log level part in the parser to
extend log_line_prefix parsing
- If there is just one argument, assume it is the logfile and use
default value for all other parameters
- Add autodetection of log format (syslog or stderr) if none is given
with option -f
- Add --outfile option to dump output to a file instead of stdout.
Default filename is out.html or out.txt following the output format.
To dump to stdout set filename to -
- Add --version command line option to show current pgbadger version.
Bug fix:
- Rearrange x and y axis
- Fix legend opacity on graphics
- Rearrange Overall stats view
- Add more "normalization" on errors messages
- Fix sample errors showing the normalized error instead of the real
error message
- Fix another decimal limit issue in the average temporary file size
- Force quiet mode when --progress is used
- Fix per sessions graphs
- Fix sort order of days/hours into hours array
- Fix sort order of days into graphics
- Remove display of locks, sessions and connections statistics when none
are available
- Fix display of empty column of checkpoint when no checkpoint was found
in log file
pgbadger-13.1/HACKING.md 0000664 0000000 0000000 00000002717 14765535576 0014635 0 ustar 00root root 0000000 0000000 # Contributing on pgBadger
Thanks for your interest in pgBadger! You need the Perl module JSON::XS
to run the full test suite. You can install it on a Debian-like system
using:
sudo apt-get install libjson-xs-perl
or on an RPM-like system using:
sudo yum install perl-JSON-XS
pgBadger has a TAP compatible test suite executed by `prove`:
$ prove
t/01_lint.t ......... ok
t/02_basics.t ....... ok
t/03_consistency.t .. ok
All tests successful.
Files=3, Tests=13, 6 wallclock secs ( 0.01 usr 0.01 sys + 5.31 cusr 0.16 csys = 5.49 CPU)
Result: PASS
$
or if you prefer to run the tests manually:
$ perl Makefile.PL && make test
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for pgBadger
Writing MYMETA.yml and MYMETA.json
cp pgbadger blib/script/pgbadger
"/usr/bin/perl" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/pgbadger
PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/01_lint.t ......... ok
t/02_basics.t ....... ok
t/03_consistency.t .. ok
All tests successful.
Files=3, Tests=13, 6 wallclock secs ( 0.03 usr 0.00 sys + 5.39 cusr 0.14 csys = 5.56 CPU)
Result: PASS
$ make clean && rm Makefile.old
Please contribute a regression test when you fix a bug or add a feature. Thanks!
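While working on a fix, you can iterate faster by running a single test file in verbose mode; a minimal example, assuming the standard `prove` tool and that `t/02_basics.t` is the file covering your change:

    $ prove -v t/02_basics.t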
pgbadger-13.1/LICENSE 0000664 0000000 0000000 00000001616 14765535576 0014251 0 ustar 00root root 0000000 0000000 Copyright (c) 2012-2025, Gilles Darold
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this
paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL Darold BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT,
SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS,
ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF
Darold HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Darold SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND Darold
HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS,
OR MODIFICATIONS.
pgbadger-13.1/MANIFEST 0000664 0000000 0000000 00000000121 14765535576 0014363 0 ustar 00root root 0000000 0000000 LICENSE
Makefile.PL
MANIFEST
META.yml
pgbadger
README
doc/pgBadger.pod
ChangeLog
pgbadger-13.1/META.yml 0000664 0000000 0000000 00000000372 14765535576 0014513 0 ustar 00root root 0000000 0000000 name: pgBadger
version: 13.1
version_from: pgbadger
installdirs: site
recommends:
Text::CSV_XS: 0
JSON::XS: 0
distribution_type: script
generated_by: ExtUtils::MakeMaker version 6.17
pgbadger-13.1/Makefile.PL 0000664 0000000 0000000 00000004331 14765535576 0015213 0 ustar 00root root 0000000 0000000 use ExtUtils::MakeMaker;
# See lib/ExtUtils/MakeMaker.pm for details of how to influence
# the contents of the Makefile that is written.
use strict;
my @ALLOWED_ARGS = ('INSTALLDIRS','DESTDIR');
# Parse command line arguments and store them as environment variables
while ($_ = shift) {
my ($k,$v) = split(/=/, $_, 2);
if (grep(/^$k$/, @ALLOWED_ARGS)) {
$ENV{$k} = $v;
}
}
$ENV{DESTDIR} =~ s/\/$//;
# Default install path
my $DESTDIR = $ENV{DESTDIR} || '';
my $INSTALLDIRS = $ENV{INSTALLDIRS} || 'site';
my %merge_compat = ();
if ($ExtUtils::MakeMaker::VERSION >= 6.46) {
%merge_compat = (
'META_MERGE' => {
resources => {
homepage => 'http://pgbadger.darold.net/',
repository => {
type => 'git',
git => 'git@github.com:darold/pgbadger.git',
web => 'https://github.com/darold/pgbadger',
},
},
}
);
}
sub MY::postamble {
return <<'EOMAKE';
USE_MARKDOWN=$(shell which pod2markdown)
README: doc/pgBadger.pod
pod2text $^ > $@
ifneq ("$(USE_MARKDOWN)", "")
cat doc/pgBadger.pod | grep "=head1 " | sed 's/^=head1 \(.*\)/- [\1](#\1)/' | sed 's/ /-/g' | sed 's/--/- /' > $@.md
sed -i '1s/^/### TABLE OF CONTENTS\n\n/' $@.md
echo >> $@.md
pod2markdown $^ | sed 's/^## /#### /' | sed 's/^# /### /' >> $@.md
else
$(warning You must install pod2markdown to generate README.md from doc/pgBadger.pod)
endif
.INTERMEDIATE: doc/synopsis.pod
doc/synopsis.pod: Makefile pgbadger
echo "=head1 SYNOPSIS" > $@
./pgbadger --help >> $@
echo "=head1 DESCRIPTION" >> $@
sed -i.bak 's/ +$$//g' $@
rm $@.bak
.PHONY: doc/pgBadger.pod
doc/pgBadger.pod: doc/synopsis.pod Makefile
sed -i.bak '/^=head1 SYNOPSIS/,/^=head1 DESCRIPTION/d' $@
sed -i.bak '4r $<' $@
rm $@.bak
EOMAKE
}
WriteMakefile(
'DISTNAME' => 'pgbadger',
'NAME' => 'pgBadger',
'VERSION_FROM' => 'pgbadger',
'dist' => {
'COMPRESS'=>'gzip -9f', 'SUFFIX' => 'gz',
'ZIP'=>'/usr/bin/zip','ZIPFLAGS'=>'-rl'
},
'AUTHOR' => 'Gilles Darold (gilles@darold.net)',
'ABSTRACT' => 'pgBadger - PostgreSQL log analysis report',
'EXE_FILES' => [ qw(pgbadger) ],
'MAN1PODS' => { 'doc/pgBadger.pod' => 'blib/man1/pgbadger.1p' },
'DESTDIR' => $DESTDIR,
'INSTALLDIRS' => $INSTALLDIRS,
'clean' => {},
%merge_compat
);
pgbadger-13.1/README 0000664 0000000 0000000 00000117776 14765535576 0014143 0 ustar 00root root 0000000 0000000 NAME
pgBadger - a fast PostgreSQL log analysis report
SYNOPSIS
Usage: pgbadger [options] logfile [...]
PostgreSQL log analyzer with fully detailed reports and graphs.
Arguments:
logfile can be a single log file, a list of files, or a shell command
returning a list of files. If you want to pass log content from stdin
use - as filename. Note that input from stdin will not work with csvlog.
Options:
-a | --average minutes : number of minutes to build the average graphs of
queries and connections. Default 5 minutes.
-A | --histo-average min: number of minutes to build the histogram graphs
of queries. Default 60 minutes.
-b | --begin datetime : start date/time for the data to be parsed in log
(either a timestamp or a time)
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
-D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log
(either a timestamp or a time)
-E | --explode : explode the main report by generating one report
per database. Global information not related to a
database is added to the postgres database report.
-f | --format logtype : possible values: syslog, syslog2, stderr, jsonlog,
csv, pgbouncer, logplex, rds and redshift. Use this
option when pgBadger is not able to detect the log
format.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-H | --html-outdir path: path to directory where HTML report must be written
in incremental mode, binary files stay on directory
defined with -O, --outdir option.
-i | --ident name : programname used as syslog ident. Default: postgres
-I | --incremental : use incremental mode, reports will be generated by
days in a separate directory, --outdir must be set.
-j | --jobs number : number of jobs to run at same time for a single log
file. Run as single by default or when working with
csvlog format.
-J | --Jobs number : number of log files to parse in parallel. Process
one file at a time by default.
-l | --last-parsed file: allow incremental log parsing by registering the
last datetime and line parsed. Useful if you want
to watch errors since last run or if you want one
report per day with a log rotated each week.
-L | --logfile-list file:file containing a list of log files to parse.
-m | --maxlength size : maximum length of a query, it will be restricted to
the given size. Default truncate size is 100000.
-M | --no-multiline : do not collect multiline statements to avoid garbage
especially on errors that generate a huge report.
-N | --appname name : only report on entries for given application name
-o | --outfile filename: define the filename for the output. Default depends
on the output format: out.html, out.txt, out.bin,
or out.json. This option can be used multiple times
to output several formats. To use json output, the
Perl module JSON::XS must be installed, to dump
output to stdout, use - as filename.
-O | --outdir path : directory where out files must be saved.
-p | --prefix string : the value of your custom log_line_prefix as
defined in your postgresql.conf. Only use it if you
aren't using one of the standard prefixes specified
in the pgBadger documentation, such as if your
prefix includes additional variables like client IP
or application name. MUST contain escape sequences
for time (%t, %m or %n) and processes (%p or %c).
See examples below.
-P | --no-prettify : disable SQL queries prettify formatter.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-Q | --query-numbering : add numbering of queries to the output when using
options --dump-all-queries or --normalized-only.
-r | --remote-host ip : set the host where to execute the cat command on
remote log file to parse the file locally.
-R | --retention N : number of weeks to keep in incremental mode. Defaults
to 0, disabled. Used to set the number of weeks to
keep in output directory. Older weeks and days
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-S | --select-only : only report SELECT queries.
-t | --top number : number of queries to store/display. Default: 20.
-T | --title string : change title of the HTML page report.
-u | --dbuser username : only report on entries for the given user.
-U | --exclude-user username : exclude entries for the specified user from
report. Can be used multiple time.
-v | --verbose : enable verbose or debug mode. Disabled by default.
-V | --version : show pgBadger version and exit.
-w | --watch-mode : only report errors just like logwatch could do.
-W | --wide-char : encode html output of queries into UTF8 to avoid
Perl message "Wide character in print".
-x | --extension : output format. Values: text, html, bin or json.
Default: html
-X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat, bzcat or unzip is not in your path.
-Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
Use this to adjust date/time in JavaScript graphs.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-file filename: path of the file that contains each regex to use
to exclude queries from the report. One regex per
line.
--include-query regex : any query that does not match the given regex will
be excluded from the report. You can use this
option multiple times. For example: "(tbl1|tbl2)".
--include-file filename: path of the file that contains each regex to the
queries to include from the report. One regex per
line.
--disable-error : do not generate error report.
--disable-hourly : do not generate hourly report.
--disable-type : do not generate report of queries by type, database
or user.
--disable-query : do not generate query reports (slowest, most
frequent, queries by users, by database, ...).
--disable-session : do not generate session report.
--disable-connection : do not generate connection report.
--disable-lock : do not generate lock report.
--disable-temporary : do not generate temporary report.
--disable-checkpoint : do not generate checkpoint/restartpoint report.
--disable-autovacuum : do not generate autovacuum report.
--charset : used to set the HTML charset to be used.
Default: utf-8.
--csv-separator : used to set the CSV field separator, default: ,
--exclude-time regex : any timestamp matching the given regex will be
excluded from the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--include-time regex : only timestamps matching the given regex will be
included in the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--exclude-db name : exclude entries for the specified database from
report. Example: "pg_dump". Can be used multiple
times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump". Can be used
multiple times.
--exclude-line regex : exclude any log entry that will match the given
regex. Can be used multiple times.
--exclude-client name : exclude log entries for the specified client ip.
Can be used multiple times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
--noreport : no reports will be created in incremental mode.
--log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an md5 sum under each query report.
--journalctl command : command to use to replace PostgreSQL logfile by
a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir path : set the path where the pid file must be stored.
Default /tmp
--pid-file file : set the name of the pid file to manage concurrent
execution of pgBadger. Default: pgbadger.pid
--rebuild : used to rebuild all html reports in incremental
output directories where there's binary data files.
--pgbouncer-only : only show PgBouncer-related menus in the header.
--start-monday : in incremental mode, calendar weeks start on
Sunday. Use this option to start on a Monday.
--iso-week-number : in incremental mode, calendar weeks start on
Monday and respect the ISO 8601 week number, range
01 to 53, where week 1 is the first week that has
at least 4 days in the new year.
--normalized-only : only dump all normalized queries to out.txt
--log-timezone +/-XX : Set the number of hours from GMT of the timezone
that must be used to adjust date/time read from
log file before being parsed. Using this option
makes log search with a date/time more difficult.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--prettify-json : use it if you want json output to be prettified.
--month-report YYYY-MM : create a cumulative HTML report over the specified
month. Requires incremental output directories and
the presence of all necessary binary data files
--day-report YYYY-MM-DD: create an HTML report over the specified day.
Requires incremental output directories and the
presence of all necessary binary data files
--noexplain : do not process lines generated by auto_explain.
--command CMD : command to execute to retrieve log entries on
stdin. pgBadger will open a pipe to the command
and parse log entries generated by the command.
--no-week : inform pgbadger to not build weekly reports in
incremental mode. Useful if it takes too much time.
--explain-url URL : use it to override the url of the graphical explain
tool. Default: https://explain.depesz.com/
--tempdir DIR : set directory where temporary files will be written
Default: File::Spec->tmpdir() || '/tmp'
--no-process-info : disable changing process title to help identify
pgbadger process, some system do not support it.
--dump-all-queries : dump all queries found in the log file replacing
bind parameters included in the queries at their
respective placeholders positions.
--keep-comments : do not remove comments from normalized queries. It
can be useful if you want to distinguish between
same normalized queries.
--no-progressbar : disable progressbar.
--dump-raw-csv : parse the log and dump the information into CSV
format. No further processing is done, no report.
--include-pid PID : only report events related to the session pid (%p).
Can be used multiple time.
--include-session ID : only report events related to the session id (%c).
Can be used multiple time.
--histogram-query VAL : use custom inbound for query times histogram.
Default inbound in milliseconds:
0,1,5,10,25,50,100,500,1000,10000
--histogram-session VAL : use custom inbound for session times histogram.
Default inbound in milliseconds:
0,500,1000,30000,60000,600000,1800000,3600000,28800000
--no-fork : do not fork any process, for debugging purpose.
pgBadger is able to parse a remote log file using a passwordless ssh
connection. Use -r or --remote-host to set the host IP address or
hostname. There are also some additional options to fully control the
ssh connection.
--ssh-program ssh path to the ssh program to use. Default: ssh.
--ssh-port port ssh port to use for the connection. Default: 22.
--ssh-user username connection login name. Defaults to running user.
--ssh-identity file path to the identity file to use.
--ssh-timeout second timeout to ssh connection failure. Default: 10 sec.
--ssh-option options list of -o options to use for the ssh connection.
Options always used:
-o ConnectTimeout=$ssh_timeout
-o PreferredAuthentications=hostbased,publickey
The log file to parse can also be specified using a URI; supported
protocols are http[s] and [s]ftp. The curl command will be used to
download the file, and the file will be parsed during download. The ssh
protocol is also supported and will use the ssh command like with the
remote host usage. See examples below.
Return codes:
0: on success
1: die on error
2: if it has been interrupted using ctrl+c for example
3: the pid file already exists or can not be created
4: no log file was given at command line
Examples:
pgbadger /var/log/postgresql.log
pgbadger /var/log/postgres.log.2.gz /var/log/postgres.log.1.gz /var/log/postgres.log
pgbadger /var/log/postgresql/postgresql-2012-05-*
pgbadger --exclude-query="^(COPY|COMMIT)" /var/log/postgresql.log
pgbadger -b "2012-06-25 10:56:11" -e "2012-06-25 10:59:11" /var/log/postgresql.log
cat /var/log/postgres.log | pgbadger -
# Log line prefix with stderr log output
pgbadger --prefix '%t [%p]: user=%u,db=%d,client=%h' /pglog/postgresql-2012-08-21*
pgbadger --prefix '%m %u@%d %p %r %a : ' /pglog/postgresql.log
# Log line prefix with syslog log output
pgbadger --prefix 'user=%u,db=%d,client=%h,appname=%a' /pglog/postgresql-2012-08-21*
# Use my 8 CPUs to parse my 10GB file faster, much faster
pgbadger -j 8 /pglog/postgresql-10.1-main.log
Use URI notation for remote log file:
pgbadger http://172.12.110.1//var/log/postgresql/postgresql-10.1-main.log
pgbadger ftp://username@172.12.110.14/postgresql-10.1-main.log
pgbadger ssh://username@172.12.110.14:2222//var/log/postgresql/postgresql-10.1-main.log*
You can use together a local PostgreSQL log and a remote pgbouncer log
file to parse:
pgbadger /var/log/postgresql/postgresql-10.1-main.log ssh://username@172.12.110.14/pgbouncer.log
Reporting errors every week by cron job:
30 23 * * 1 /usr/bin/pgbadger -q -w /var/log/postgresql.log -o /var/reports/pg_errors.html
Generate report every week using incremental behavior:
0 4 * * 1 /usr/bin/pgbadger -q `find /var/log/ -mtime -7 -name "postgresql.log*"` -o /var/reports/pg_errors-`date +\%F`.html -l /var/reports/pgbadger_incremental_file.dat
This supposes that your log file and HTML report are also rotated every
week.
Or better, use the auto-generated incremental reports:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
will generate a report per day and per week.
In incremental mode, you can also specify the number of weeks to keep in
the reports:
/usr/bin/pgbadger --retention 2 -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
If you have a pg_dump at 23:00 and 13:00 each day lasting half an hour,
you can use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
This will help avoid having COPY statements, as generated by pg_dump, at
the top of the list of slowest queries. You can also use --exclude-appname
"pg_dump" to solve this problem in a simpler way.
You can also parse journalctl output just as if it was a log file:
pgbadger --journalctl 'journalctl -u postgresql-9.5'
or, worse, call it from a remote host:
pgbadger -r 192.168.1.159 --journalctl 'journalctl -u postgresql-9.5'
you don't need to specify any log file at command line, but if you have
other PostgreSQL log files to parse, you can add them as usual.
To rebuild all incremental HTML reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
pgBadger also supports Heroku PostgreSQL logs using logplex format:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
this will stream Heroku PostgreSQL log to pgbadger through stdin.
pgBadger can auto-detect RDS and CloudWatch PostgreSQL logs using the rds
format:
pgbadger -f rds -o rds_out.html rds.log
Each CloudSQL PostgreSQL log is a fairly normal PostgreSQL log, but
encapsulated in JSON format. It is autodetected by pgBadger, but in case
you need to force the log format, use `jsonlog`:
pgbadger -f jsonlog -o cloudsql_out.html cloudsql.log
This is the same as with the jsonlog extension; the JSON format is
different, but pgBadger can parse both formats.
pgBadger also supports logs produced by CloudNativePG Postgres operator
for Kubernetes:
pgbadger -f jsonlog -o cnpg_out.html cnpg.log
To create a cumulative report over a month use command:
pgbadger --month-report 2019-05 /path/to/incremental/reports/
this will add a link to the month name in the calendar view of the
incremental reports to look at the report for May 2019. Use -E or
--explode if the reports were built using this option.
DESCRIPTION
pgBadger is a PostgreSQL log analyzer built for speed providing fully
detailed reports based on your PostgreSQL log files. It's a small
standalone Perl script that outperforms any other PostgreSQL log
analyzer.
It is written in pure Perl and uses a JavaScript library (flotr2) to
draw graphs so that you don't need to install any additional Perl
modules or other packages. Furthermore, this library gives us more
features such as zooming. pgBadger also uses the Bootstrap JavaScript
library and the FontAwesome webfont for better design. Everything is
embedded.
pgBadger is able to autodetect your log file format (syslog, stderr,
csvlog or jsonlog) if the file is long enough. It is designed to parse
huge log files as well as compressed files. Supported compressed formats
are gzip, bzip2, lz4, xz, zip and zstd. For the xz format you must have
an xz version higher than 5.05 that supports the --robot option. lz4
files must be compressed with the --content-size option for pgbadger to
determine the uncompressed file size. For the complete list of features,
see below.
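For example, an lz4-compressed log produced with that option can be parsed directly; this is a sketch assuming the standard lz4 command line tool, where --content-size stores the uncompressed size in the frame header:

    lz4 --content-size postgresql.log postgresql.log.lz4
    pgbadger postgresql.log.lz4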
All charts are zoomable and can be saved as PNG images.
You can also limit pgBadger to only report errors or remove any part of
the report using command-line options.
pgBadger supports any custom format set in the log_line_prefix directive
of your postgresql.conf file as long as it at least specifies the %t and
%p patterns.
pgBadger allows parallel processing of a single log file or multiple
files through the use of the -j option specifying the number of CPUs.
If you want to save system performance you can also use log_duration
instead of log_min_duration_statement to have reports on duration and
number of queries only.
FEATURE
pgBadger reports everything about your SQL queries:
Overall statistics.
The most frequent waiting queries.
Queries that waited the most.
Queries generating the most temporary files.
Queries generating the largest temporary files.
The slowest queries.
Queries that took up the most time.
The most frequent queries.
The most frequent errors.
Histogram of query times.
Histogram of sessions times.
Users involved in top queries.
Applications involved in top queries.
Queries generating the most cancellations.
Queries most cancelled.
The most time consuming prepare/bind queries
The following reports are also available with hourly charts divided into
periods of five minutes:
SQL queries statistics.
Temporary file statistics.
Checkpoints statistics.
Autovacuum and autoanalyze statistics.
Cancelled queries.
Error events (panic, fatal, error and warning).
Error class distribution.
There are also some pie charts about distribution of:
Locks statistics.
Queries by type (select/insert/update/delete).
Distribution of query types per database/application.
Sessions per database/user/client/application.
Connections per database/user/client/application.
Autovacuum and autoanalyze per table.
Queries per user and total duration per user.
All charts are zoomable and can be saved as PNG images. SQL queries
reported are highlighted and beautified automatically.
pgBadger is also able to parse PgBouncer log files and to create the
following reports:
Request Throughput
Bytes I/O Throughput
Average Query Duration
Simultaneous sessions
Histogram of sessions times
Sessions per database
Sessions per user
Sessions per host
Established connections
Connections per database
Connections per user
Connections per host
Most used reserved pools
Most Frequent Errors/Events
You can also have incremental reports with one report per day and a
cumulative report per week. Two multiprocess modes are available to
speed up log parsing, one using one core per log file, and the second
using multiple cores to parse a single file. These modes can be
combined.
Histogram granularity can be adjusted using the -A command-line option.
By default, histograms report the mean of top queries/errors occurring
per hour, but you can specify the granularity down to the minute.
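For example, to build the histogram reports with 10-minute buckets instead of the default (the log file path below is only a placeholder):
pgbadger -A 10 /var/log/postgresql/postgresql.log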
pgBadger can also be used in a central place to parse remote log files
using a passwordless SSH connection. This mode can be used with
compressed files and in the multiprocess per file mode (-J), but cannot
be used with the CSV log format.
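For example, assuming passwordless SSH access to the database host, a remote log file could be parsed with something like the following (host and path are placeholders):
pgbadger -r 10.0.0.5 /var/log/postgresql/postgresql.log -o remote_report.html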
Examples of reports can be found here:
https://pgbadger.darold.net/#reports
REQUIREMENT
pgBadger comes as a single Perl script - you do not need anything other
than a modern Perl distribution. Charts are rendered using a JavaScript
library, so you don't need anything other than a web browser. Your
browser will do all the work.
If you plan to parse PostgreSQL CSV log files, you might need some Perl
Modules:
Text::CSV_XS - to parse PostgreSQL CSV log files.
This module is optional, if you don't have PostgreSQL log in the CSV
format, you don't need to install it.
If you want to export statistics as JSON file, you need an additional
Perl module:
JSON::XS - JSON serialising/deserialising, done correctly and fast
This module is optional, if you don't select the json output format, you
don't need to install it. You can install it on a Debian-like system
using:
sudo apt-get install libjson-xs-perl
and on RPM-like system using:
sudo yum install perl-JSON-XS
Compressed log file format is autodetected from the file extension. If
pgBadger finds a gz extension, it will use the zcat utility; with a bz2
extension, it will use bzcat; with lz4, it will use lz4cat; with zst, it
will use zstdcat; if the file extension is zip or xz, then the unzip or
xz utility will be used.
If those utilities are not found in the PATH environment variable, then
use the --zcat command-line option to change this path. For example:
--zcat="/usr/local/bin/gunzip -c" or --zcat="/usr/local/bin/bzip2 -dc"
--zcat="C:\tools\unzip -p"
By default, pgBadger will use the zcat, bzcat, lz4cat, zstdcat and unzip
utilities following the file extension. If you use the default
autodetection of compression format, you can mix gz, bz2, lz4, xz, zip
or zstd files. Specifying a custom value of --zcat option will remove
the possibility of mixed compression format.
Note that multiprocessing cannot be used with compressed files or CSV
files, nor on the Windows platform.
INSTALLATION
Download the tarball from GitHub and unpack the archive as follows:
tar xzf pgbadger-11.x.tar.gz
cd pgbadger-11.x/
perl Makefile.PL
make && sudo make install
This will copy the Perl script pgbadger to /usr/local/bin/pgbadger by
default and the man page into /usr/local/share/man/man1/pgbadger.1.
Those are the default installation directories for 'site' install.
If you want to install all under /usr/ location, use INSTALLDIRS='perl'
as an argument of Makefile.PL. The script will be installed into
/usr/bin/pgbadger and the manpage into /usr/share/man/man1/pgbadger.1.
For example, to install everything just like Debian does, proceed as
follows:
perl Makefile.PL INSTALLDIRS=vendor
By default, INSTALLDIRS is set to site.
POSTGRESQL CONFIGURATION
You must enable and set some configuration directives in your
postgresql.conf before starting.
You must first enable SQL query logging to have something to parse:
log_min_duration_statement = 0
Here every statement will be logged; on a busy server you may want to
increase this value to only log queries with a longer duration. Note
that if you have log_statement set to 'all', nothing will be logged
through the log_min_duration_statement directive. See the next chapter
for more information.
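For example, a setting like the following (the 250 ms threshold is only illustrative) would log only queries running for at least 250 milliseconds:
log_min_duration_statement = 250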
pgBadger supports any custom format set in the log_line_prefix directive
of your postgresql.conf file as long as it at least specifies a time
escape sequence (%t, %m or %n) and a process-related escape sequence (%p
or %c).
For example, with 'stderr' log format, log_line_prefix must be at least:
log_line_prefix = '%t [%p]: '
Log line prefix could add user, database name, application name and
client ip address as follows:
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
or for syslog log file format:
log_line_prefix = 'user=%u,db=%d,app=%a,client=%h '
Log line prefix for stderr output could also be:
log_line_prefix = '%t [%p]: db=%d,user=%u,app=%a,client=%h '
or for syslog output:
log_line_prefix = 'db=%d,user=%u,app=%a,client=%h '
You need to enable other parameters in postgresql.conf to get more
information from your log files:
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default
Do not enable log_statement as its log format will not be parsed by
pgBadger.
Of course your log messages should be in English with or without locale
support:
lc_messages='en_US.UTF-8'
lc_messages='C'
pgBadger parser does not support other locales, like 'fr_FR.UTF-8' for
example.
LOG STATEMENTS
Considerations about log_min_duration_statement, log_duration and
log_statement configuration directives.
If you want the query statistics to include the actual query strings,
you must set log_min_duration_statement to 0 or more milliseconds.
If you just want to report duration and number of queries and don't want
all details about queries, set log_min_duration_statement to -1 to
disable it and enable log_duration in your postgresql.conf file. If you
want to add the most common query report, you can either choose to set
log_min_duration_statement to a higher value or to enable log_statement.
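A minimal postgresql.conf sketch for this duration-only mode would be:
log_min_duration_statement = -1
log_duration = on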
Enabling log_min_duration_statement will add reports about slowest
queries and queries that took up the most time. Take care that if you
have log_statement set to 'all', nothing will be logged with
log_min_duration_statement.
Warning: Do not enable log_min_duration_statement, log_duration and
log_statement all together; this will result in wrong counter values.
Note that this will also drastically increase the size of your log.
log_min_duration_statement should always be preferred.
PARALLEL PROCESSING
To enable parallel processing you just have to use the -j N option where
N is the number of cores you want to use.
pgBadger will then proceed as follows:
for each log file
chunk size = int(file size / N)
look at start/end offsets of these chunks
fork N processes and seek to the start offset of each chunk
each process will terminate when the parser reaches the end offset
of its chunk
each process writes stats into a binary temporary file
wait for all child processes to terminate
All binary temporary files generated will then be read and loaded into
memory to build the html output.
With this method, at the start/end of chunks pgBadger may truncate or
omit a maximum of N queries per log file, which is an insignificant gap
if you have millions of queries in your log file. The chance that the
query you were looking for is lost is near zero, which is why I think
this gap is livable. Most of the time the query is counted twice, but
truncated.
When you have many small log files and many CPUs, it is speedier to
dedicate one core to one log file at a time. To enable this behavior,
you have to use option -J N instead. With 200 log files of 10MB each,
the use of the -J option starts being really interesting with 8 cores.
Using this method you will be sure not to lose any queries in the
reports.
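For example, assuming several uncompressed daily log files, one core per file could be used as follows (file paths are placeholders):
pgbadger -J 8 /var/log/postgresql/postgresql-2024-06-*.log -o weekly_report.html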
Here is a benchmark done on a server with 8 CPUs and a single file of
9.5GB.
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+---------+-------+-------+------
-j | 1h41m18 | 50m25 | 25m39 | 15m58
-J | 1h41m18 | 54m28 | 41m16 | 34m45
With 200 log files of 10MB each, so 2GB in total, the results are
slightly different:
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+-------+-------+-------+------
-j | 20m15 | 9m56 | 5m20 | 4m20
-J | 20m15 | 9m49 | 5m00 | 2m40
So it is recommended to use -j unless you have hundreds of small log
files and can use at least 8 CPUs.
IMPORTANT: when you are using parallel parsing, pgBadger will generate a
lot of temporary files in the /tmp directory and will remove them at the
end, so do not remove those files while pgBadger is running. They are
all named with the template tmp_pgbadgerXXXX.bin so they can be easily
identified.
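If /tmp is too small to hold these temporary files, the --tempdir option can point them to another directory, for example (the directory is a placeholder):
pgbadger -j 8 --tempdir /srv/pgbadger_tmp /var/log/postgresql/postgresql.log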
INCREMENTAL REPORTS
pgBadger includes an automatic incremental report mode using option -I
or --incremental. When running in this mode, pgBadger will generate one
report per day and a cumulative report per week. Output is first done in
binary format into the mandatory output directory (see option -O or
--outdir), then in HTML format for daily and weekly reports with a main
index file.
The main index file will show a dropdown menu per week with a link to
each week report and links to daily reports of each week.
For example, if you run pgBadger as follows based on a daily rotated
file:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
you will have all daily and weekly reports for the full running period.
In this mode, pgBadger will create an automatic incremental file in the
output directory, so you don't have to use the -l option unless you want
to change the path of that file. This means that you can run pgBadger in
this mode each day on a log file rotated each week, and it will not
count the log entries twice.
To save disk space, you may want to use the -X or --extra-files
command-line option to force pgBadger to write JavaScript and CSS to
separate files in the output directory. The resources will then be
loaded using script and link tags.
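For example, the daily cron entry shown above could be extended with -X so that JavaScript and CSS are shared between reports (paths are placeholders):
0 4 * * * /usr/bin/pgbadger -I -X -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/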
Rebuilding reports
Incremental reports can be rebuilt after a pgBadger fix or a new
feature, in order to update all HTML reports. To rebuild all reports
where a binary file is still present, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
Monthly reports
By default, pgBadger in incremental mode only computes daily and weekly
reports. If you want monthly cumulative reports, you will have to use a
separate command to specify the report to build. For example, to build a
report for August 2019:
pgbadger -X --month-report 2019-08 /var/www/pg_reports/
this will add a link to the month name in the calendar view of
incremental reports to access the monthly report. The report for the
current month can be run every day; it is entirely rebuilt each time.
The monthly report is not built by default because it can take a lot of
time depending on the amount of data.
If reports were built with the per-database option ( -E | --explode ),
it must also be used when calling pgbadger to build the monthly report:
pgbadger -E -X --month-report 2019-08 /var/www/pg_reports/
This is the same when using the rebuild option ( -R | --rebuild ).
BINARY FORMAT
Using the binary format it is possible to create custom incremental and
cumulative reports. For example, if you want to refresh a pgBadger
report each hour from a daily PostgreSQL log file, you can proceed by
running the following commands each hour:
pgbadger --last-parsed .pgbadger_last_state_file -o sunday/hourX.bin /var/log/pgsql/postgresql-Sun.log
to generate the incremental data files in binary format. And to generate
the fresh HTML report from that binary file:
pgbadger sunday/*.bin
Or as another example, if you generate one log file per hour and you
want reports to be rebuilt each time the log file is rotated, proceed as
follows:
pgbadger -o day1/hour01.bin /var/log/pgsql/pglog/postgresql-2012-03-23_10.log
pgbadger -o day1/hour02.bin /var/log/pgsql/pglog/postgresql-2012-03-23_11.log
pgbadger -o day1/hour03.bin /var/log/pgsql/pglog/postgresql-2012-03-23_12.log
...
When you want to refresh the HTML report, for example, each time after a
new binary file is generated, just do the following:
pgbadger -o day1_report.html day1/*.bin
Adjust the commands to suit your particular needs.
JSON FORMAT
JSON format is good for sharing data with other languages, which makes
it easy to integrate pgBadger results into other monitoring tools, like
Cacti or Graphite.
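For example, a JSON dump suitable for ingestion by an external tool could be produced with (the output filename is a placeholder):
pgbadger -x json -o stats.json /var/log/postgresql/postgresql.log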
AUTHORS
pgBadger is an original work from Gilles Darold.
The pgBadger logo is an original creation of Damien Cazeils.
The pgBadger v4.x design comes from the "Art is code" company.
This web site is a work of Gilles Darold.
pgBadger is maintained by Gilles Darold and everyone who wants to
contribute.
Many people have contributed to pgBadger, they are all quoted in the
Changelog file.
LICENSE
pgBadger is free software distributed under the PostgreSQL Licence.
Copyright (c) 2012-2025, Gilles Darold
A modified version of the SQL::Beautify Perl Module is embedded in
pgBadger with copyright (C) 2009 by Jonas Kramer and is published under
the terms of the Artistic License 2.0.
pgbadger-13.1/README.md 0000664 0000000 0000000 00000114402 14765535576 0014521 0 ustar 00root root 0000000 0000000 ### TABLE OF CONTENTS
- [NAME](#NAME)
- [SYNOPSIS](#SYNOPSIS)
- [DESCRIPTION](#DESCRIPTION)
- [FEATURE](#FEATURE)
- [REQUIREMENT](#REQUIREMENT)
- [INSTALLATION](#INSTALLATION)
- [POSTGRESQL-CONFIGURATION](#POSTGRESQL-CONFIGURATION)
- [LOG-STATEMENTS](#LOG-STATEMENTS)
- [PARALLEL-PROCESSING](#PARALLEL-PROCESSING)
- [INCREMENTAL-REPORTS](#INCREMENTAL-REPORTS)
- [BINARY-FORMAT](#BINARY-FORMAT)
- [JSON-FORMAT](#JSON-FORMAT)
- [AUTHORS](#AUTHORS)
- [LICENSE](#LICENSE)
### NAME
pgBadger - a fast PostgreSQL log analysis report
### SYNOPSIS
Usage: pgbadger \[options\] logfile \[...\]
PostgreSQL log analyzer with fully detailed reports and graphs.
Arguments:
logfile can be a single log file, a list of files, or a shell command
returning a list of files. If you want to pass log content from stdin
use - as filename. Note that input from stdin will not work with csvlog.
Options:
-a | --average minutes : number of minutes to build the average graphs of
queries and connections. Default 5 minutes.
-A | --histo-average min: number of minutes to build the histogram graphs
of queries. Default 60 minutes.
-b | --begin datetime : start date/time for the data to be parsed in log
(either a timestamp or a time)
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
-D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log
(either a timestamp or a time)
-E | --explode : explode the main report by generating one report
per database. Global information not related to a
database is added to the postgres database report.
-f | --format logtype : possible values: syslog, syslog2, stderr, jsonlog,
csv, pgbouncer, logplex, rds and redshift. Use this
option when pgBadger is not able to detect the log
format.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-H | --html-outdir path: path to directory where HTML report must be written
in incremental mode, binary files stay on directory
defined with -O, --outdir option.
-i | --ident name : programname used as syslog ident. Default: postgres
-I | --incremental : use incremental mode, reports will be generated by
days in a separate directory, --outdir must be set.
-j | --jobs number : number of jobs to run at same time for a single log
file. Run as single by default or when working with
csvlog format.
-J | --Jobs number : number of log files to parse in parallel. Process
one file at a time by default.
-l | --last-parsed file: allow incremental log parsing by registering the
last datetime and line parsed. Useful if you want
to watch errors since last run or if you want one
report per day with a log rotated each week.
-L | --logfile-list file:file containing a list of log files to parse.
-m | --maxlength size : maximum length of a query, it will be restricted to
the given size. Default truncate size is 100000.
-M | --no-multiline : do not collect multiline statements to avoid garbage
especially on errors that generate a huge report.
-N | --appname name : only report on entries for given application name
-o | --outfile filename: define the filename for the output. Default depends
on the output format: out.html, out.txt, out.bin,
or out.json. This option can be used multiple times
to output several formats. To use json output, the
Perl module JSON::XS must be installed, to dump
output to stdout, use - as filename.
-O | --outdir path : directory where out files must be saved.
-p | --prefix string : the value of your custom log_line_prefix as
defined in your postgresql.conf. Only use it if you
aren't using one of the standard prefixes specified
in the pgBadger documentation, such as if your
prefix includes additional variables like client IP
or application name. MUST contain escape sequences
for time (%t, %m or %n) and processes (%p or %c).
See examples below.
-P | --no-prettify : disable SQL queries prettify formatter.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-Q | --query-numbering : add numbering of queries to the output when using
options --dump-all-queries or --normalized-only.
-r | --remote-host ip : set the host where to execute the cat command on
remote log file to parse the file locally.
-R | --retention N : number of weeks to keep in incremental mode. Defaults
to 0, disabled. Used to set the number of weeks to
keep in output directory. Older weeks and days
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-S | --select-only : only report SELECT queries.
-t | --top number : number of queries to store/display. Default: 20.
-T | --title string : change title of the HTML page report.
-u | --dbuser username : only report on entries for the given user.
-U | --exclude-user username : exclude entries for the specified user from
report. Can be used multiple times.
-v | --verbose : enable verbose or debug mode. Disabled by default.
-V | --version : show pgBadger version and exit.
-w | --watch-mode : only report errors just like logwatch could do.
-W | --wide-char : encode html output of queries into UTF8 to avoid
Perl message "Wide character in print".
-x | --extension : output format. Values: text, html, bin or json.
Default: html
-X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat, bzcat or unzip is not in your path.
-Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
Use this to adjust date/time in JavaScript graphs.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-file filename: path of the file that contains each regex to use
to exclude queries from the report. One regex per
line.
--include-query regex : any query that does not match the given regex will
be excluded from the report. You can use this
option multiple times. For example: "(tbl1|tbl2)".
--include-file filename: path of the file that contains each regex to the
queries to include from the report. One regex per
line.
--disable-error : do not generate error report.
--disable-hourly : do not generate hourly report.
--disable-type : do not generate report of queries by type, database
or user.
--disable-query : do not generate query reports (slowest, most
frequent, queries by users, by database, ...).
--disable-session : do not generate session report.
--disable-connection : do not generate connection report.
--disable-lock : do not generate lock report.
--disable-temporary : do not generate temporary report.
--disable-checkpoint : do not generate checkpoint/restartpoint report.
--disable-autovacuum : do not generate autovacuum report.
--charset : used to set the HTML charset to be used.
Default: utf-8.
--csv-separator : used to set the CSV field separator, default: ,
--exclude-time regex : any timestamp matching the given regex will be
excluded from the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--include-time regex : only timestamps matching the given regex will be
included in the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--exclude-db name : exclude entries for the specified database from
report. Example: "pg_dump". Can be used multiple
times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump". Can be used
multiple times.
--exclude-line regex : exclude any log entry that will match the given
regex. Can be used multiple times.
--exclude-client name : exclude log entries for the specified client ip.
Can be used multiple times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
--noreport : no reports will be created in incremental mode.
--log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an md5 sum under each query report.
--journalctl command : command to use to replace PostgreSQL logfile by
a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir path : set the path where the pid file must be stored.
Default /tmp
--pid-file file : set the name of the pid file to manage concurrent
execution of pgBadger. Default: pgbadger.pid
--rebuild : used to rebuild all html reports in incremental
output directories where there's binary data files.
--pgbouncer-only : only show PgBouncer-related menus in the header.
--start-monday : in incremental mode, calendar weeks start on
Sunday. Use this option to start on a Monday.
--iso-week-number : in incremental mode, calendar weeks start on
Monday and respect the ISO 8601 week number, range
01 to 53, where week 1 is the first week that has
at least 4 days in the new year.
--normalized-only : only dump all normalized queries to out.txt
--log-timezone +/-XX : Set the number of hours from GMT of the timezone
that must be used to adjust date/time read from
log file before being parsed. Using this option
makes log search with a date/time more difficult.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--prettify-json : use it if you want json output to be prettified.
--month-report YYYY-MM : create a cumulative HTML report over the specified
month. Requires incremental output directories and
the presence of all necessary binary data files
--day-report YYYY-MM-DD: create an HTML report over the specified day.
Requires incremental output directories and the
presence of all necessary binary data files
--noexplain : do not process lines generated by auto_explain.
--command CMD : command to execute to retrieve log entries on
stdin. pgBadger will open a pipe to the command
and parse log entries generated by the command.
--no-week : inform pgbadger to not build weekly reports in
incremental mode. Useful if it takes too much time.
--explain-url URL : use it to override the url of the graphical explain
tool. Default: https://explain.depesz.com/
--tempdir DIR : set directory where temporary files will be written
Default: File::Spec->tmpdir() || '/tmp'
--no-process-info : disable changing process title to help identify
pgbadger process, some systems do not support it.
--dump-all-queries : dump all queries found in the log file replacing
bind parameters included in the queries at their
respective placeholders positions.
--keep-comments : do not remove comments from normalized queries. It
can be useful if you want to distinguish between
same normalized queries.
--no-progressbar : disable progressbar.
--dump-raw-csv : parse the log and dump the information into CSV
format. No further processing is done, no report.
--include-pid PID : only report events related to the session pid (%p).
Can be used multiple times.
--include-session ID : only report events related to the session id (%c).
Can be used multiple times.
--histogram-query VAL : use custom inbound for query times histogram.
Default inbound in milliseconds:
0,1,5,10,25,50,100,500,1000,10000
--histogram-session VAL : use custom inbound for session times histogram.
Default inbound in milliseconds:
0,500,1000,30000,60000,600000,1800000,3600000,28800000
pgBadger is able to parse a remote log file using a passwordless ssh connection.
Use -r or --remote-host to set the host IP address or hostname. There are also
some additional options to fully control the ssh connection.
--ssh-program ssh path to the ssh program to use. Default: ssh.
--ssh-port port ssh port to use for the connection. Default: 22.
--ssh-user username connection login name. Defaults to running user.
--ssh-identity file path to the identity file to use.
--ssh-timeout second timeout to ssh connection failure. Default: 10 sec.
--ssh-option options list of -o options to use for the ssh connection.
Options always used:
-o ConnectTimeout=$ssh_timeout
-o PreferredAuthentications=hostbased,publickey
Log file to parse can also be specified using an URI, supported protocols are
http\[s\] and \[s\]ftp. The curl command will be used to download the file, and the
file will be parsed during download. The ssh protocol is also supported and will
use the ssh command as with the remote host usage. See examples below.
Return codes:
0: on success
1: die on error
2: if it has been interrupted using ctrl+c for example
3: the pid file already exists or can not be created
4: no log file was given at command line
Examples:
pgbadger /var/log/postgresql.log
pgbadger /var/log/postgres.log.2.gz /var/log/postgres.log.1.gz /var/log/postgres.log
pgbadger /var/log/postgresql/postgresql-2012-05-*
pgbadger --exclude-query="^(COPY|COMMIT)" /var/log/postgresql.log
pgbadger -b "2012-06-25 10:56:11" -e "2012-06-25 10:59:11" /var/log/postgresql.log
cat /var/log/postgres.log | pgbadger -
# Log line prefix with stderr log output
pgbadger --prefix '%t [%p]: user=%u,db=%d,client=%h' /pglog/postgresql-2012-08-21*
pgbadger --prefix '%m %u@%d %p %r %a : ' /pglog/postgresql.log
# Log line prefix with syslog log output
pgbadger --prefix 'user=%u,db=%d,client=%h,appname=%a' /pglog/postgresql-2012-08-21*
# Use my 8 CPUs to parse my 10GB file faster, much faster
pgbadger -j 8 /pglog/postgresql-10.1-main.log
Use URI notation for remote log file:
pgbadger http://172.12.110.1//var/log/postgresql/postgresql-10.1-main.log
pgbadger ftp://username@172.12.110.14/postgresql-10.1-main.log
pgbadger ssh://username@172.12.110.14:2222//var/log/postgresql/postgresql-10.1-main.log*
You can use together a local PostgreSQL log and a remote pgbouncer log file to parse:
pgbadger /var/log/postgresql/postgresql-10.1-main.log ssh://username@172.12.110.14/pgbouncer.log
Reporting errors every week by cron job:
30 23 * * 1 /usr/bin/pgbadger -q -w /var/log/postgresql.log -o /var/reports/pg_errors.html
Generate report every week using incremental behavior:
0 4 * * 1 /usr/bin/pgbadger -q `find /var/log/ -mtime -7 -name "postgresql.log*"` -o /var/reports/pg_errors-`date +\%F`.html -l /var/reports/pgbadger_incremental_file.dat
This supposes that your log file and HTML report are also rotated every week.
Or better, use the auto-generated incremental reports:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
will generate a report per day and per week.
In incremental mode, you can also specify the number of weeks to keep in the
reports:
/usr/bin/pgbadger --retention 2 -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
If you have a pg\_dump at 23:00 and 13:00 each day for half an hour, you can
use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
This will help avoid having COPY statements, as generated by pg\_dump, on top of
the list of slowest queries. You can also use --exclude-appname "pg\_dump" to
solve this problem in a simpler way.
You can also parse journalctl output just as if it was a log file:
pgbadger --journalctl 'journalctl -u postgresql-9.5'
or worse, call it from a remote host:
pgbadger -r 192.168.1.159 --journalctl 'journalctl -u postgresql-9.5'
you don't need to specify any log file at command line, but if you have other
PostgreSQL log files to parse, you can add them as usual.
To rebuild all incremental HTML reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
pgBadger also supports Heroku PostgreSQL logs using logplex format:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
this will stream Heroku PostgreSQL log to pgbadger through stdin.
pgBadger can auto-detect RDS and CloudWatch PostgreSQL logs using the
rds format:
pgbadger -f rds -o rds_out.html rds.log
CloudSQL PostgreSQL logs are fairly normal PostgreSQL logs, but encapsulated
in JSON format. They are autodetected by pgBadger, but in case you need to
force the log format, use \`jsonlog\`:
pgbadger -f jsonlog -o cloudsql_out.html cloudsql.log
This is the same as with the jsonlog extension: the JSON format is different,
but pgBadger can parse both formats.
pgBadger also supports logs produced by CloudNativePG Postgres operator for Kubernetes:
pgbadger -f jsonlog -o cnpg_out.html cnpg.log
To create a cumulative report over a month, use the command:
pgbadger --month-report 2019-05 /path/to/incremental/reports/
this will add a link to the month name in the calendar view of incremental
reports to access the report for May 2019.
Use -E or --explode if the reports were built using this option.
### DESCRIPTION
pgBadger is a PostgreSQL log analyzer built for speed providing fully
detailed reports based on your PostgreSQL log files. It's a small standalone
Perl script that outperforms any other PostgreSQL log analyzer.
It is written in pure Perl and uses a JavaScript library (flotr2) to draw
graphs so that you don't need to install any additional Perl modules or
other packages. Furthermore, this library gives us more features such
as zooming. pgBadger also uses the Bootstrap JavaScript library and
the FontAwesome webfont for better design. Everything is embedded.
pgBadger is able to autodetect your log file format (syslog, stderr, csvlog
or jsonlog) if the file is long enough. It is designed to parse huge log
files as well as compressed files. Supported compressed formats are gzip,
bzip2, lz4, xz, zip and zstd. For the xz format you must have an xz version
higher than 5.05 that supports the --robot option. lz4 files must be
compressed with the --content-size option for pgbadger to determine the
uncompressed file size. For the complete list of features, see below.
All charts are zoomable and can be saved as PNG images.
You can also limit pgBadger to only report errors or remove any part of the
report using command-line options.
pgBadger supports any custom format set in the log\_line\_prefix directive of
your postgresql.conf file as long as it at least specifies the %t and %p patterns.
pgBadger allows parallel processing of a single log file or multiple
files through the use of the -j option specifying the number of CPUs.
If you want to save system performance you can also use log\_duration instead of
log\_min\_duration\_statement to have reports on duration and number of queries only.
### FEATURE
pgBadger reports everything about your SQL queries:
Overall statistics.
The most frequent waiting queries.
Queries that waited the most.
Queries generating the most temporary files.
Queries generating the largest temporary files.
The slowest queries.
Queries that took up the most time.
The most frequent queries.
The most frequent errors.
Histogram of query times.
Histogram of sessions times.
Users involved in top queries.
Applications involved in top queries.
Queries generating the most cancellations.
Queries most cancelled.
The most time consuming prepare/bind queries
The following reports are also available with hourly charts divided into
periods of five minutes:
SQL queries statistics.
Temporary file statistics.
Checkpoints statistics.
Autovacuum and autoanalyze statistics.
Cancelled queries.
Error events (panic, fatal, error and warning).
Error class distribution.
There are also some pie charts about distribution of:
Locks statistics.
Queries by type (select/insert/update/delete).
Distribution of query types per database/application.
Sessions per database/user/client/application.
Connections per database/user/client/application.
Autovacuum and autoanalyze per table.
Queries per user and total duration per user.
All charts are zoomable and can be saved as PNG images. SQL queries reported are
highlighted and beautified automatically.
pgBadger is also able to parse PgBouncer log files and to create the following
reports:
Request Throughput
Bytes I/O Throughput
Average Query Duration
Simultaneous sessions
Histogram of sessions times
Sessions per database
Sessions per user
Sessions per host
Established connections
Connections per database
Connections per user
Connections per host
Most used reserved pools
Most Frequent Errors/Events
You can also have incremental reports with one report per day and a cumulative
report per week. Two multiprocess modes are available to speed up log parsing,
one using one core per log file, and the second using multiple cores to parse
a single file. These modes can be combined.
Histogram granularity can be adjusted using the -A command-line option. By default,
histograms report the mean of top queries/errors occurring per hour, but you can
specify the granularity down to the minute.
pgBadger can also be used in a central place to parse remote log files using a
passwordless SSH connection. This mode can be used with compressed files and in
the multiprocess per file mode (-J), but cannot be used with the CSV log format.
Examples of reports can be found here: https://pgbadger.darold.net/#reports
### REQUIREMENT
pgBadger comes as a single Perl script - you do not need anything other than a modern
Perl distribution. Charts are rendered using a JavaScript library, so you don't need
anything other than a web browser. Your browser will do all the work.
If you plan to parse PostgreSQL CSV log files, you might need some Perl Modules:
Text::CSV_XS - to parse PostgreSQL CSV log files.
This module is optional, if you don't have PostgreSQL log in the CSV format, you don't
need to install it.
If you want to export statistics as JSON file, you need an additional Perl module:
JSON::XS - JSON serialising/deserialising, done correctly and fast
This module is optional, if you don't select the json output format, you don't
need to install it. You can install it on a Debian-like system using:
sudo apt-get install libjson-xs-perl
and on RPM-like system using:
sudo yum install perl-JSON-XS
Compressed log file format is autodetected from the file extension. If pgBadger finds
a gz extension, it will use the zcat utility; with a bz2 extension, it will use bzcat;
with lz4, it will use lz4cat; with zst, it will use zstdcat; if the file extension
is zip or xz, then the unzip or xz utility will be used.
If those utilities are not found in the PATH environment variable, then use the --zcat
command-line option to change this path. For example:
--zcat="/usr/local/bin/gunzip -c" or --zcat="/usr/local/bin/bzip2 -dc"
--zcat="C:\tools\unzip -p"
By default, pgBadger will use the zcat, bzcat, lz4cat, zstdcat and unzip utilities
following the file extension. If you use the default autodetection of compression format,
you can mix gz, bz2, lz4, xz, zip or zstd files. Specifying a custom value of
\--zcat option will remove the possibility of mixed compression format.
Note that multiprocessing cannot be used with compressed files or CSV files,
nor on the Windows platform.
### INSTALLATION
Download the tarball from GitHub and unpack the archive as follows:
tar xzf pgbadger-11.x.tar.gz
cd pgbadger-11.x/
perl Makefile.PL
make && sudo make install
This will copy the Perl script pgbadger to /usr/local/bin/pgbadger by default and the
man page into /usr/local/share/man/man1/pgbadger.1. Those are the default installation
directories for 'site' install.
If you want to install all under /usr/ location, use INSTALLDIRS='perl' as an argument
of Makefile.PL. The script will be installed into /usr/bin/pgbadger and the manpage
into /usr/share/man/man1/pgbadger.1.
For example, to install everything just like Debian does, proceed as follows:
perl Makefile.PL INSTALLDIRS=vendor
By default, INSTALLDIRS is set to site.
### POSTGRESQL CONFIGURATION
You must enable and set some configuration directives in your postgresql.conf
before starting.
You must first enable SQL query logging to have something to parse:
log_min_duration_statement = 0
Here every statement will be logged; on a busy server you may want to increase
this value to only log queries with a longer duration. Note that if you have
log\_statement set to 'all', nothing will be logged through the log\_min\_duration\_statement
directive. See the next chapter for more information.
pgBadger supports any custom format set in the log\_line\_prefix directive of
your postgresql.conf file as long as it at least specifies a time escape sequence
(%t, %m or %n) and a process-related escape sequence (%p or %c).
For example, with 'stderr' log format, log\_line\_prefix must be at least:
log_line_prefix = '%t [%p]: '
Log line prefix could add user, database name, application name and client ip
address as follows:
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
or for syslog log file format:
log_line_prefix = 'user=%u,db=%d,app=%a,client=%h '
Log line prefix for stderr output could also be:
log_line_prefix = '%t [%p]: db=%d,user=%u,app=%a,client=%h '
or for syslog output:
log_line_prefix = 'db=%d,user=%u,app=%a,client=%h '
You need to enable other parameters in postgresql.conf to get more information from your log files:
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default
Do not enable log\_statement as its log format will not be parsed by pgBadger.
Of course your log messages should be in English with or without locale support:
lc_messages='en_US.UTF-8'
lc_messages='C'
pgBadger parser does not support other locales, like 'fr\_FR.UTF-8' for example.
### LOG STATEMENTS
Considerations about log\_min\_duration\_statement, log\_duration and log\_statement
configuration directives.
If you want the query statistics to include the actual query strings, you
must set log\_min\_duration\_statement to 0 or more milliseconds.
If you just want to report duration and number of queries and don't want all
details about queries, set log\_min\_duration\_statement to -1 to disable it and
enable log\_duration in your postgresql.conf file. If you want to add the most
common query report, you can either choose to set log\_min\_duration\_statement
to a higher value or to enable log\_statement.
Enabling log\_min\_duration\_statement will add reports about slowest queries and
queries that took up the most time. Take care that if you have log\_statement
set to 'all', nothing will be logged with log\_min\_duration\_statement.
Warning: Do not enable log\_min\_duration\_statement, log\_duration and
log\_statement all together; this will result in wrong counter values. Note
that this will also drastically increase the size of your log.
log\_min\_duration\_statement should always be preferred.
### PARALLEL PROCESSING
To enable parallel processing you just have to use the -j N option where N is
the number of cores you want to use.
pgBadger will then proceed as follows:
for each log file
chunk size = int(file size / N)
look at start/end offsets of these chunks
fork N processes and seek to the start offset of each chunk
each process will terminate when the parser reaches the end offset
of its chunk
each process writes stats into a binary temporary file
wait for all child processes to terminate
All binary temporary files generated will then be read and loaded into
memory to build the html output.
With this method, at the start/end of chunks pgBadger may truncate or omit a
maximum of N queries per log file, which is an insignificant gap if you have
millions of queries in your log file. The chance that the query you were
looking for is lost is near zero, which is why I think this gap is livable.
Most of the time the query is counted twice, but truncated.
When you have many small log files and many CPUs, it is speedier to dedicate
one core to one log file at a time. To enable this behavior, you have to use
option -J N instead. With 200 log files of 10MB each, the use of the -J option
starts being really interesting with 8 cores. Using this method you will be
sure not to lose any queries in the reports.
Here is a benchmark done on a server with 8 CPUs and a single file of 9.5GB.
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+---------+-------+-------+------
-j | 1h41m18 | 50m25 | 25m39 | 15m58
-J | 1h41m18 | 54m28 | 41m16 | 34m45
With 200 log files of 10MB each, so 2GB in total, the results are slightly
different:
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+-------+-------+-------+------
-j | 20m15 | 9m56 | 5m20 | 4m20
-J | 20m15 | 9m49 | 5m00 | 2m40
So it is recommended to use -j unless you have hundreds of small log files
and can use at least 8 CPUs.
IMPORTANT: when you are using parallel parsing, pgBadger will generate a
lot of temporary files in the /tmp directory and will remove them at the
end, so do not remove those files while pgBadger is running. They are
all named with the template tmp\_pgbadgerXXXX.bin so they can be
easily identified.
### INCREMENTAL REPORTS
pgBadger includes an automatic incremental report mode using option -I or
\--incremental. When running in this mode, pgBadger will generate one report
per day and a cumulative report per week. Output is first done in binary
format into the mandatory output directory (see option -O or --outdir),
then in HTML format for daily and weekly reports with a main index file.
The main index file will show a dropdown menu per week with a link to each
week report and links to daily reports of each week.
For example, if you run pgBadger as follows based on a daily rotated file:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
you will have all daily and weekly reports for the full running period.
In this mode, pgBadger will create an automatic incremental file in the
output directory, so you don't have to use the -l option unless you want
to change the path of that file. This means that you can run pgBadger in
this mode each day on a log file rotated each week, and it will not count
the log entries twice.
To save disk space, you may want to use the -X or --extra-files command-line
option to force pgBadger to write JavaScript and CSS to separate files in
the output directory. The resources will then be loaded using script and
link tags.
#### Rebuilding reports
Incremental reports can be rebuilt after a pgBadger fix or a new feature,
in order to update all HTML reports. To rebuild all reports where a binary
file is still present, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
#### Monthly reports
By default, pgBadger in incremental mode only computes daily and weekly reports.
If you want monthly cumulative reports, you will have to use a separate command
to specify the report to build. For example, to build a report for August 2019:
pgbadger -X --month-report 2019-08 /var/www/pg_reports/
this will add a link to the month name in the calendar view of incremental
reports to access the monthly report. The report for the current month can be
run every day; it is entirely rebuilt each time. The monthly report is not built
by default because it can take a lot of time depending on the amount of data.
If reports were built with the per-database option ( -E | --explode ), it must
also be used when calling pgbadger to build the monthly report:
pgbadger -E -X --month-report 2019-08 /var/www/pg_reports/
This is the same when using the rebuild option ( -R | --rebuild ).
### BINARY FORMAT
Using the binary format it is possible to create custom incremental and
cumulative reports. For example, if you want to refresh a pgBadger
report each hour from a daily PostgreSQL log file, you can proceed by
running the following commands each hour:
pgbadger --last-parsed .pgbadger_last_state_file -o sunday/hourX.bin /var/log/pgsql/postgresql-Sun.log
to generate the incremental data files in binary format. And to generate the fresh HTML
report from that binary file:
pgbadger sunday/*.bin
Or as another example, if you generate one log file per hour and you want
reports to be rebuilt each time the log file is rotated, proceed as
follows:
pgbadger -o day1/hour01.bin /var/log/pgsql/pglog/postgresql-2012-03-23_10.log
pgbadger -o day1/hour02.bin /var/log/pgsql/pglog/postgresql-2012-03-23_11.log
pgbadger -o day1/hour03.bin /var/log/pgsql/pglog/postgresql-2012-03-23_12.log
...
When you want to refresh the HTML report, for example, each time after a new binary file
is generated, just do the following:
pgbadger -o day1_report.html day1/*.bin
Adjust the commands to suit your particular needs.
### JSON FORMAT
JSON format is good for sharing data with other languages, which makes it
easy to integrate pgBadger results into other monitoring tools, like Cacti
or Graphite.
### AUTHORS
pgBadger is an original work from Gilles Darold.
The pgBadger logo is an original creation of Damien Cazeils.
The pgBadger v4.x design comes from the "Art is code" company.
This web site is a work of Gilles Darold.
pgBadger is maintained by Gilles Darold and everyone who wants to contribute.
Many people have contributed to pgBadger, they are all quoted in the Changelog file.
### LICENSE
pgBadger is free software distributed under the PostgreSQL Licence.
Copyright (c) 2012-2025, Gilles Darold
A modified version of the SQL::Beautify Perl Module is embedded in pgBadger
with copyright (C) 2009 by Jonas Kramer and is published under the terms of
the Artistic License 2.0.
pgbadger-13.1/doc/ 0000775 0000000 0000000 00000000000 14765535576 0014005 5 ustar 00root root 0000000 0000000 pgbadger-13.1/doc/pgBadger.pod 0000664 0000000 0000000 00000106602 14765535576 0016231 0 ustar 00root root 0000000 0000000 =head1 NAME
pgBadger - a fast PostgreSQL log analysis report
=head1 SYNOPSIS
Usage: pgbadger [options] logfile [...]
PostgreSQL log analyzer with fully detailed reports and graphs.
Arguments:
logfile can be a single log file, a list of files, or a shell command
returning a list of files. If you want to pass log content from stdin
use - as filename. Note that input from stdin will not work with csvlog.
Options:
-a | --average minutes : number of minutes to build the average graphs of
queries and connections. Default 5 minutes.
-A | --histo-average min: number of minutes to build the histogram graphs
of queries. Default 60 minutes.
-b | --begin datetime : start date/time for the data to be parsed in log
(either a timestamp or a time)
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
-D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log
(either a timestamp or a time)
-E | --explode : explode the main report by generating one report
per database. Global information not related to a
database is added to the postgres database report.
-f | --format logtype : possible values: syslog, syslog2, stderr, jsonlog,
csv, pgbouncer, logplex, rds and redshift. Use this
option when pgBadger is not able to detect the log
format.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-H | --html-outdir path: path to directory where HTML report must be written
in incremental mode, binary files stay on directory
defined with -O, --outdir option.
-i | --ident name : programname used as syslog ident. Default: postgres
-I | --incremental : use incremental mode, reports will be generated by
days in a separate directory, --outdir must be set.
-j | --jobs number : number of jobs to run at same time for a single log
file. Run as single by default or when working with
csvlog format.
-J | --Jobs number : number of log files to parse in parallel. Process
one file at a time by default.
-l | --last-parsed file: allow incremental log parsing by registering the
last datetime and line parsed. Useful if you want
to watch errors since last run or if you want one
report per day with a log rotated each week.
-L | --logfile-list file:file containing a list of log files to parse.
-m | --maxlength size : maximum length of a query, it will be restricted to
the given size. Default truncate size is 100000.
-M | --no-multiline : do not collect multiline statements to avoid garbage
especially on errors that generate a huge report.
-N | --appname name : only report on entries for given application name
-o | --outfile filename: define the filename for the output. Default depends
on the output format: out.html, out.txt, out.bin,
or out.json. This option can be used multiple times
to output several formats. To use json output, the
Perl module JSON::XS must be installed, to dump
output to stdout, use - as filename.
-O | --outdir path : directory where out files must be saved.
-p | --prefix string : the value of your custom log_line_prefix as
defined in your postgresql.conf. Only use it if you
aren't using one of the standard prefixes specified
in the pgBadger documentation, such as if your
prefix includes additional variables like client IP
or application name. MUST contain escape sequences
for time (%t, %m or %n) and processes (%p or %c).
See examples below.
-P | --no-prettify : disable SQL queries prettify formatter.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-Q | --query-numbering : add numbering of queries to the output when using
options --dump-all-queries or --normalized-only.
-r | --remote-host ip : set the host where to execute the cat command on
remote log file to parse the file locally.
-R | --retention N : number of weeks to keep in incremental mode. Defaults
to 0, disabled. Used to set the number of weeks to
keep in output directory. Older weeks and days
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-S | --select-only : only report SELECT queries.
-t | --top number : number of queries to store/display. Default: 20.
-T | --title string : change title of the HTML page report.
-u | --dbuser username : only report on entries for the given user.
-U | --exclude-user username : exclude entries for the specified user from
report. Can be used multiple times.
-v | --verbose : enable verbose or debug mode. Disabled by default.
-V | --version : show pgBadger version and exit.
-w | --watch-mode : only report errors just like logwatch could do.
-W | --wide-char : encode html output of queries into UTF8 to avoid
Perl message "Wide character in print".
-x | --extension : output format. Values: text, html, bin or json.
Default: html
-X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat, bzcat or unzip is not in your path.
-Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
Use this to adjust date/time in JavaScript graphs.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-file filename: path of the file that contains each regex to use
to exclude queries from the report. One regex per
line.
--include-query regex : any query that does not match the given regex will
be excluded from the report. You can use this
option multiple times. For example: "(tbl1|tbl2)".
--include-file filename: path of the file that contains each regex to the
queries to include from the report. One regex per
line.
--disable-error : do not generate error report.
--disable-hourly : do not generate hourly report.
--disable-type : do not generate report of queries by type, database
or user.
--disable-query : do not generate query reports (slowest, most
frequent, queries by users, by database, ...).
--disable-session : do not generate session report.
--disable-connection : do not generate connection report.
--disable-lock : do not generate lock report.
--disable-temporary : do not generate temporary report.
--disable-checkpoint : do not generate checkpoint/restartpoint report.
--disable-autovacuum : do not generate autovacuum report.
--charset : used to set the HTML charset to be used.
Default: utf-8.
--csv-separator : used to set the CSV field separator, default: ,
--exclude-time regex : any timestamp matching the given regex will be
excluded from the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--include-time regex : only timestamps matching the given regex will be
included in the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--exclude-db name : exclude entries for the specified database from
report. Example: "pg_dump". Can be used multiple
times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump". Can be used
multiple times.
--exclude-line regex : exclude any log entry that will match the given
regex. Can be used multiple times.
--exclude-client name : exclude log entries for the specified client ip.
Can be used multiple times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
--noreport : no reports will be created in incremental mode.
--log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an md5 sum under each query report.
--journalctl command : command to use to replace PostgreSQL logfile by
a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir path : set the path where the pid file must be stored.
Default /tmp
--pid-file file : set the name of the pid file to manage concurrent
execution of pgBadger. Default: pgbadger.pid
--rebuild : used to rebuild all html reports in incremental
output directories where there are binary data files.
--pgbouncer-only : only show PgBouncer-related menus in the header.
--start-monday : in incremental mode, calendar weeks start on
Sunday. Use this option to start on a Monday.
--iso-week-number : in incremental mode, calendar weeks start on
Monday and respect the ISO 8601 week number, range
01 to 53, where week 1 is the first week that has
at least 4 days in the new year.
--normalized-only : only dump all normalized queries to out.txt
--log-timezone +/-XX : Set the number of hours from GMT of the timezone
that must be used to adjust date/time read from
log file before being parsed. Using this option
makes log search with a date/time more difficult.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--prettify-json : use it if you want json output to be prettified.
--month-report YYYY-MM : create a cumulative HTML report over the specified
month. Requires incremental output directories and
the presence of all necessary binary data files
--day-report YYYY-MM-DD: create an HTML report over the specified day.
Requires incremental output directories and the
presence of all necessary binary data files
--noexplain : do not process lines generated by auto_explain.
--command CMD : command to execute to retrieve log entries on
stdin. pgBadger will open a pipe to the command
and parse log entries generated by the command.
--no-week : inform pgbadger to not build weekly reports in
incremental mode. Useful if it takes too much time.
--explain-url URL : use it to override the url of the graphical explain
tool. Default: https://explain.depesz.com/
--tempdir DIR : set directory where temporary files will be written
Default: File::Spec->tmpdir() || '/tmp'
--no-process-info : disable changing process title to help identify
pgbadger process, some systems do not support it.
--dump-all-queries : dump all queries found in the log file replacing
bind parameters included in the queries at their
respective placeholder positions.
--keep-comments : do not remove comments from normalized queries. It
can be useful if you want to distinguish between
same normalized queries.
--no-progressbar : disable progressbar.
--dump-raw-csv : parse the log and dump the information into CSV
format. No further processing is done, no report.
--include-pid PID : only report events related to the session pid (%p).
Can be used multiple times.
--include-session ID : only report events related to the session id (%c).
Can be used multiple times.
--histogram-query VAL : use custom bounds for the query times histogram.
Default bounds in milliseconds:
0,1,5,10,25,50,100,500,1000,10000
--histogram-session VAL : use custom bounds for the session times histogram.
Default bounds in milliseconds:
0,500,1000,30000,60000,600000,1800000,3600000,28800000
--no-fork : do not fork any process, for debugging purpose.
pgBadger is able to parse a remote log file using a passwordless ssh connection.
Use -r or --remote-host to set the host IP address or hostname. There are also
some additional options to fully control the ssh connection.
--ssh-program ssh path to the ssh program to use. Default: ssh.
--ssh-port port ssh port to use for the connection. Default: 22.
--ssh-user username connection login name. Defaults to running user.
--ssh-identity file path to the identity file to use.
--ssh-timeout seconds timeout for ssh connection failure. Default: 10 sec.
--ssh-option options list of -o options to use for the ssh connection.
Options always used:
-o ConnectTimeout=$ssh_timeout
-o PreferredAuthentications=hostbased,publickey
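For example (the host, user and identity file below are placeholders to adapt), a
remote parse using a dedicated ssh user and identity file could look like:
pgbadger -r 192.168.1.159 --ssh-user postgres --ssh-identity /home/postgres/.ssh/id_rsa /var/log/postgresql/postgresql.log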
Log files to parse can also be specified using a URI; supported protocols are
http[s] and [s]ftp. The curl command will be used to download the file, and the
file will be parsed during download. The ssh protocol is also supported and will
use the ssh command as with the remote host usage. See the examples below.
Return codes:
0: on success
1: die on error
2: if it has been interrupted using ctrl+c for example
3: the pid file already exists or can not be created
4: no log file was given on the command line
Examples:
pgbadger /var/log/postgresql.log
pgbadger /var/log/postgres.log.2.gz /var/log/postgres.log.1.gz /var/log/postgres.log
pgbadger /var/log/postgresql/postgresql-2012-05-*
pgbadger --exclude-query="^(COPY|COMMIT)" /var/log/postgresql.log
pgbadger -b "2012-06-25 10:56:11" -e "2012-06-25 10:59:11" /var/log/postgresql.log
cat /var/log/postgres.log | pgbadger -
# Log line prefix with stderr log output
pgbadger --prefix '%t [%p]: user=%u,db=%d,client=%h' /pglog/postgresql-2012-08-21*
pgbadger --prefix '%m %u@%d %p %r %a : ' /pglog/postgresql.log
# Log line prefix with syslog log output
pgbadger --prefix 'user=%u,db=%d,client=%h,appname=%a' /pglog/postgresql-2012-08-21*
# Use my 8 CPUs to parse my 10GB file faster, much faster
pgbadger -j 8 /pglog/postgresql-10.1-main.log
Use URI notation for remote log file:
pgbadger http://172.12.110.1//var/log/postgresql/postgresql-10.1-main.log
pgbadger ftp://username@172.12.110.14/postgresql-10.1-main.log
pgbadger ssh://username@172.12.110.14:2222//var/log/postgresql/postgresql-10.1-main.log*
You can parse a local PostgreSQL log and a remote pgbouncer log file together:
pgbadger /var/log/postgresql/postgresql-10.1-main.log ssh://username@172.12.110.14/pgbouncer.log
Reporting errors every week by cron job:
30 23 * * 1 /usr/bin/pgbadger -q -w /var/log/postgresql.log -o /var/reports/pg_errors.html
Generate report every week using incremental behavior:
0 4 * * 1 /usr/bin/pgbadger -q `find /var/log/ -mtime -7 -name "postgresql.log*"` -o /var/reports/pg_errors-`date +\%F`.html -l /var/reports/pgbadger_incremental_file.dat
This supposes that your log file and HTML report are also rotated every week.
Or better, use the auto-generated incremental reports:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
will generate a report per day and per week.
In incremental mode, you can also specify the number of weeks to keep in the
reports:
/usr/bin/pgbadger --retention 2 -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
If you have a pg_dump running at 23:00 and 13:00 each day for half an hour, you can
use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
This will help avoid having COPY statements, as generated by pg_dump, at the top of
the list of slowest queries. You can also use --exclude-appname "pg_dump" to
solve this problem in a simpler way.
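For example (the log file name is illustrative):
pgbadger --exclude-appname "pg_dump" /var/log/postgresql.log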
You can also parse journalctl output just as if it was a log file:
pgbadger --journalctl 'journalctl -u postgresql-9.5'
or worse, call it from a remote host:
pgbadger -r 192.168.1.159 --journalctl 'journalctl -u postgresql-9.5'
you don't need to specify any log file on the command line, but if you have other
PostgreSQL log files to parse, you can add them as usual.
To rebuild all incremental HTML reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
pgBadger also supports Heroku PostgreSQL logs using logplex format:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
this will stream Heroku PostgreSQL log to pgbadger through stdin.
pgBadger can auto-detect RDS and CloudWatch PostgreSQL logs using the
rds format:
pgbadger -f rds -o rds_out.html rds.log
Each CloudSQL PostgreSQL log is a fairly normal PostgreSQL log, but encapsulated
in JSON format. It is auto-detected by pgBadger, but in case you need to force
the log format, use `jsonlog`:
pgbadger -f jsonlog -o cloudsql_out.html cloudsql.log
This is the same as with the jsonlog extension; the JSON format is different,
but pgBadger can parse both formats.
pgBadger also supports logs produced by CloudNativePG Postgres operator for Kubernetes:
pgbadger -f jsonlog -o cnpg_out.html cnpg.log
To create a cumulative report over a month use command:
pgbadger --month-report 2019-05 /path/to/incremental/reports/
this will add a link to the month name in the calendar view of the
incremental reports, pointing to the report for May 2019.
Use -E or --explode if the reports were built using this option.
=head1 DESCRIPTION
pgBadger is a PostgreSQL log analyzer built for speed providing fully
detailed reports based on your PostgreSQL log files. It's a small standalone
Perl script that outperforms any other PostgreSQL log analyzer.
It is written in pure Perl and uses a JavaScript library (flotr2) to draw
graphs so that you don't need to install any additional Perl modules or
other packages. Furthermore, this library gives us more features such
as zooming. pgBadger also uses the Bootstrap JavaScript library and
the FontAwesome webfont for better design. Everything is embedded.
pgBadger is able to autodetect your log file format (syslog, stderr, csvlog
or jsonlog) if the file is long enough. It is designed to parse huge log
files as well as compressed files. Supported compressed formats are gzip,
bzip2, lz4, xz, zip and zstd. For the xz format you must have an xz version
higher than 5.05 that supports the --robot option. lz4 files must be
compressed with the --content-size option for pgbadger to determine the
uncompressed file size. For the complete list of features, see below.
All charts are zoomable and can be saved as PNG images.
You can also limit pgBadger to only report errors or remove any part of the
report using command-line options.
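For example, assuming a log file at /var/log/postgresql.log, you could report errors
only (as with -w | --watch-mode) or drop some report sections:
pgbadger -w /var/log/postgresql.log
pgbadger --disable-lock --disable-temporary /var/log/postgresql.log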
pgBadger supports any custom format set in the log_line_prefix directive of
your postgresql.conf file as long as it at least specifies the %t and %p patterns.
pgBadger allows parallel processing of a single log file or multiple
files through the use of the -j option specifying the number of CPUs.
If you want to save system performance you can also use log_duration instead of
log_min_duration_statement to have reports on duration and number of queries only.
=head1 FEATURE
pgBadger reports everything about your SQL queries:
Overall statistics.
The most frequent waiting queries.
Queries that waited the most.
Queries generating the most temporary files.
Queries generating the largest temporary files.
The slowest queries.
Queries that took up the most time.
The most frequent queries.
The most frequent errors.
Histogram of query times.
Histogram of sessions times.
Users involved in top queries.
Applications involved in top queries.
Queries generating the most cancellations.
Queries most cancelled.
The most time-consuming prepare/bind queries.
The following reports are also available with hourly charts divided into
periods of five minutes:
SQL queries statistics.
Temporary file statistics.
Checkpoints statistics.
Autovacuum and autoanalyze statistics.
Cancelled queries.
Error events (panic, fatal, error and warning).
Error class distribution.
There are also some pie charts about distribution of:
Locks statistics.
Queries by type (select/insert/update/delete).
Distribution of query types per database/application.
Sessions per database/user/client/application.
Connections per database/user/client/application.
Autovacuum and autoanalyze per table.
Queries per user and total duration per user.
All charts are zoomable and can be saved as PNG images. SQL queries reported are
highlighted and beautified automatically.
pgBadger is also able to parse PgBouncer log files and to create the following
reports:
Request Throughput
Bytes I/O Throughput
Average Query Duration
Simultaneous sessions
Histogram of sessions times
Sessions per database
Sessions per user
Sessions per host
Established connections
Connections per database
Connections per user
Connections per host
Most used reserved pools
Most Frequent Errors/Events
You can also have incremental reports with one report per day and a cumulative
report per week. Two multiprocess modes are available to speed up log parsing,
one using one core per log file, and the second using multiple cores to parse
a single file. These modes can be combined.
Histogram granularity can be adjusted using the -A command-line option. By default,
histograms report the mean of each top query/error occurring per hour, but you can
specify the granularity down to the minute.
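For example, to get per-minute granularity for these histograms (the log file path
is illustrative):
pgbadger -A 1 /var/log/postgresql.log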
pgBadger can also be used in a central place to parse remote log files using a
passwordless SSH connection. This mode can be used with compressed files and in
the multiprocess per file mode (-J), but cannot be used with the CSV log format.
Examples of reports can be found here: https://pgbadger.darold.net/#reports
=head1 REQUIREMENT
pgBadger comes as a single Perl script - you do not need anything other than a modern
Perl distribution. Charts are rendered using a JavaScript library, so you don't need
anything other than a web browser. Your browser will do all the work.
If you plan to parse PostgreSQL CSV log files, you might need some Perl Modules:
Text::CSV_XS - to parse PostgreSQL CSV log files.
This module is optional; if you don't have PostgreSQL logs in CSV format, you don't
need to install it.
If you want to export statistics as JSON file, you need an additional Perl module:
JSON::XS - JSON serialising/deserialising, done correctly and fast
This module is optional; if you don't select the json output format, you don't
need to install it. You can install it on a Debian-like system using:
sudo apt-get install libjson-xs-perl
and on an RPM-like system using:
sudo yum install perl-JSON-XS
Compressed log file format is autodetected from the file extension. If pgBadger finds
a gz extension, it will use the zcat utility; with a bz2 extension, it will use bzcat;
with lz4, it will use lz4cat; with zst, it will use zstdcat; if the file extension
is zip or xz, then the unzip or xz utility will be used.
If those utilities are not found in the PATH environment variable, then use the --zcat
command-line option to change this path. For example:
--zcat="/usr/local/bin/gunzip -c" or --zcat="/usr/local/bin/bzip2 -dc"
--zcat="C:\tools\unzip -p"
By default, pgBadger will use the zcat, bzcat, lz4cat, zstdcat and unzip utilities
following the file extension. If you use the default autodetection of compression format,
you can mix gz, bz2, lz4, xz, zip or zstd files. Specifying a custom value of
--zcat option will remove the possibility of mixed compression format.
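For example, the following invocation (file names are illustrative) relies on
autodetection to mix compression formats:
pgbadger /pglog/postgresql.log.1.gz /pglog/postgresql.log.2.bz2 /pglog/postgresql.log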
Note that multiprocessing cannot be used with compressed files or CSV files,
nor under the Windows platform.
=head1 INSTALLATION
Download the tarball from GitHub and unpack the archive as follows:
tar xzf pgbadger-11.x.tar.gz
cd pgbadger-11.x/
perl Makefile.PL
make && sudo make install
This will copy the Perl script pgbadger to /usr/local/bin/pgbadger by default and the
man page into /usr/local/share/man/man1/pgbadger.1. Those are the default installation
directories for 'site' install.
If you want to install all under /usr/ location, use INSTALLDIRS='perl' as an argument
of Makefile.PL. The script will be installed into /usr/bin/pgbadger and the manpage
into /usr/share/man/man1/pgbadger.1.
For example, to install everything just like Debian does, proceed as follows:
perl Makefile.PL INSTALLDIRS=vendor
By default, INSTALLDIRS is set to site.
=head1 POSTGRESQL CONFIGURATION
You must enable and set some configuration directives in your postgresql.conf
before starting.
You must first enable SQL query logging to have something to parse:
log_min_duration_statement = 0
Here every statement will be logged; on a busy server you may want to increase
this value to only log queries with a longer duration. Note that if you have
log_statement set to 'all', nothing will be logged through the log_min_duration_statement
directive. See the next chapter for more information.
pgBadger supports any custom format set in the log_line_prefix directive of
your postgresql.conf file as long as it at least specifies a time escape sequence
(%t, %m or %n) and a process-related escape sequence (%p or %c).
For example, with 'stderr' log format, log_line_prefix must be at least:
log_line_prefix = '%t [%p]: '
The log line prefix could add user, database name, application name and client IP
address as follows:
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
or for syslog log file format:
log_line_prefix = 'user=%u,db=%d,app=%a,client=%h '
Log line prefix for stderr output could also be:
log_line_prefix = '%t [%p]: db=%d,user=%u,app=%a,client=%h '
or for syslog output:
log_line_prefix = 'db=%d,user=%u,app=%a,client=%h '
You need to enable other parameters in postgresql.conf to get more information from your log files:
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default
Do not enable log_statement as its log format will not be parsed by pgBadger.
Of course your log messages should be in English with or without locale support:
lc_messages='en_US.UTF-8'
lc_messages='C'
pgBadger parser does not support other locales, like 'fr_FR.UTF-8' for example.
=head1 LOG STATEMENTS
Considerations about log_min_duration_statement, log_duration and log_statement
configuration directives.
If you want the query statistics to include the actual query strings, you
must set log_min_duration_statement to 0 or more milliseconds.
If you just want to report duration and number of queries and don't want all
details about queries, set log_min_duration_statement to -1 to disable it and
enable log_duration in your postgresql.conf file. If you want to add the most
common query report, you can either choose to set log_min_duration_statement
to a higher value or to enable log_statement.
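For example, a minimal postgresql.conf setting for this duration/count-only mode
could be:
log_min_duration_statement = -1
log_duration = on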
Enabling log_min_duration_statement will add reports about slowest queries and
queries that took up the most time. Take care that if you have log_statement
set to 'all', nothing will be logged with log_min_duration_statement.
Warning: do not enable log_min_duration_statement, log_duration and
log_statement all together; this will result in wrong counter values. Note
that this will also drastically increase the size of your log.
log_min_duration_statement should always be preferred.
=head1 PARALLEL PROCESSING
To enable parallel processing you just have to use the -j N option where N is
the number of cores you want to use.
pgBadger will then proceed as follows:
for each log file
chunk size = int(file size / N)
look at start/end offsets of these chunks
fork N processes and seek to the start offset of each chunk
each process will terminate when the parser reaches the end offset
of its chunk
each process writes stats into a binary temporary file
wait for all child processes to terminate
All binary temporary files generated will then be read and loaded into
memory to build the html output.
With that method, at the start/end of chunks pgBadger may truncate or omit a
maximum of N queries per log file, which is an insignificant gap if you have
millions of queries in your log file. The chance that the query you were
looking for is lost is near zero, which is why this gap is considered acceptable.
Most of the time the query is counted twice, but truncated.
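As an illustration only, here is a minimal Perl sketch of this chunking strategy;
it is not pgBadger's actual implementation, and the file name, worker body and
line counting are simplified placeholders:
#!/usr/bin/env perl
# Sketch: split a log file into N byte ranges and fork one worker per range.
use strict;
use warnings;
my ($file, $ncores) = (shift || 'postgresql.log', shift || 4);
my $size = -s $file or die "cannot stat $file\n";
my $chunk = int($size / $ncores);
my @offsets = map { $_ * $chunk } 0 .. $ncores - 1;
push(@offsets, $size); # end offset of the last chunk
for my $i (0 .. $ncores - 1) {
    my $pid = fork();
    die "fork failed: $!\n" unless defined $pid;
    next if $pid; # parent: keep forking the other workers
    # child: parse only the byte range [$offsets[$i], $offsets[$i+1])
    open(my $fh, '<', $file) or die "cannot open $file: $!\n";
    seek($fh, $offsets[$i], 0);
    <$fh> if $i; # skip the probably partial first line of the chunk
    my $nlines = 0;
    while (<$fh>) {
        $nlines++; # a real worker would aggregate stats into a temporary file
        last if tell($fh) >= $offsets[$i + 1];
    }
    close($fh);
    print "chunk $i: $nlines lines\n";
    exit 0;
}
1 while wait() != -1; # parent waits for all children to terminate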
When you have many small log files and many CPUs, it is speedier to dedicate
one core to one log file at a time. To enable this behavior, you have to use
option -J N instead. With 200 log files of 10MB each, the use of the -J option
starts being really interesting with 8 cores. Using this method you will be
sure not to lose any queries in the reports.
Here is a benchmark done on a server with 8 CPUs and a single file of 9.5GB.
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+---------+-------+-------+------
-j | 1h41m18 | 50m25 | 25m39 | 15m58
-J | 1h41m18 | 54m28 | 41m16 | 34m45
With 200 log files of 10MB each, so 2GB in total, the results are slightly
different:
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+-------+-------+-------+------
-j | 20m15 | 9m56 | 5m20 | 4m20
-J | 20m15 | 9m49 | 5m00 | 2m40
So it is recommended to use -j unless you have hundreds of small log files
and can use at least 8 CPUs.
IMPORTANT: when you are using parallel parsing, pgBadger will generate a
lot of temporary files in the /tmp directory and will remove them at the
end, so do not remove those files while pgBadger is running. They are
all named with the template tmp_pgbadgerXXXX.bin, so they can be
easily identified.
=head1 INCREMENTAL REPORTS
pgBadger includes an automatic incremental report mode using option -I or
--incremental. When running in this mode, pgBadger will generate one report
per day and a cumulative report per week. Output is first done in binary
format into the mandatory output directory (see option -O or --outdir),
then in HTML format for daily and weekly reports with a main index file.
The main index file will show a dropdown menu per week with a link to each
week report and links to daily reports of each week.
For example, if you run pgBadger as follows based on a daily rotated file:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
you will have all daily and weekly reports for the full running period.
In this mode, pgBadger will create an automatic incremental file in the
output directory, so you don't have to use the -l option unless you want
to change the path of that file. This means that you can run pgBadger in
this mode each day on a log file rotated each week, and it will not count
the log entries twice.
To save disk space, you may want to use the -X or --extra-files command-line
option to force pgBadger to write JavaScript and CSS to separate files in
the output directory. The resources will then be loaded using script and
link tags.
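For example, the cron entry shown above could become:
0 4 * * * /usr/bin/pgbadger -I -X -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/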
=head2 Rebuilding reports
Incremental reports can be rebuilt after a pgBadger report fix or a new
feature, to update all HTML reports. To rebuild all reports where a binary
file is still present, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
=head2 Monthly reports
By default, pgBadger in incremental mode only computes daily and weekly reports.
If you want monthly cumulative reports, you will have to use a separate command
to specify the report to build. For example, to build a report for August 2019:
pgbadger -X --month-report 2019-08 /var/www/pg_reports/
this will add a link to the month name in the calendar view of the incremental
reports, pointing to the monthly report. The report for the current month can be
run every day; it is entirely rebuilt each time. The monthly report is not built
by default because it could take a long time depending on the amount of data.
If reports were built with the per-database option ( -E | --explode ), it must
be used too when calling pgbadger to build the monthly report:
pgbadger -E -X --month-report 2019-08 /var/www/pg_reports/
This is the same when using the rebuild option ( --rebuild ).
=head1 BINARY FORMAT
Using the binary format it is possible to create custom incremental and
cumulative reports. For example, if you want to refresh a pgBadger
report each hour from a daily PostgreSQL log file, you can proceed by
running the following commands each hour:
pgbadger --last-parsed .pgbadger_last_state_file -o sunday/hourX.bin /var/log/pgsql/postgresql-Sun.log
to generate the incremental data files in binary format. And to generate the fresh HTML
report from that binary file:
pgbadger sunday/*.bin
Or as another example, if you generate one log file per hour and you want
reports to be rebuilt each time the log file is rotated, proceed as
follows:
pgbadger -o day1/hour01.bin /var/log/pgsql/pglog/postgresql-2012-03-23_10.log
pgbadger -o day1/hour02.bin /var/log/pgsql/pglog/postgresql-2012-03-23_11.log
pgbadger -o day1/hour03.bin /var/log/pgsql/pglog/postgresql-2012-03-23_12.log
...
When you want to refresh the HTML report, for example, each time after a new binary file
is generated, just do the following:
pgbadger -o day1_report.html day1/*.bin
Adjust the commands to suit your particular needs.
=head1 JSON FORMAT
The JSON format is good for sharing data with other languages, which makes it
easy to integrate pgBadger's results into other monitoring tools, like Cacti
or Graphite.
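For example, a JSON report (file names are illustrative) can be produced with:
pgbadger -x json --prettify-json -o out.json /var/log/postgresql.log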
=head1 AUTHORS
pgBadger is an original work from Gilles Darold.
The pgBadger logo is an original creation of Damien Cazeils.
The pgBadger v4.x design comes from the "Art is code" company.
This web site is a work of Gilles Darold.
pgBadger is maintained by Gilles Darold and everyone who wants to contribute.
Many people have contributed to pgBadger; they are all listed in the Changelog file.
=head1 LICENSE
pgBadger is free software distributed under the PostgreSQL Licence.
Copyright (c) 2012-2025, Gilles Darold
A modified version of the SQL::Beautify Perl Module is embedded in pgBadger
with copyright (C) 2009 by Jonas Kramer and is published under the terms of
the Artistic License 2.0.
pgbadger-13.1/pgbadger 0000775 0000000 0000000 00006144265 14765535576 0014763 0 ustar 00root root 0000000 0000000 #!/usr/bin/env perl
#------------------------------------------------------------------------------
#
# pgBadger - Advanced PostgreSQL log analyzer
#
# This program is open source, licensed under the PostgreSQL Licence.
# For license terms, see the LICENSE file.
#------------------------------------------------------------------------------
#
# Settings in postgresql.conf
#
# You should enable SQL query logging with log_min_duration_statement >= 0
# With stderr output
# Log line prefix should be: log_line_prefix = '%t [%p]: '
# Log line prefix should be: log_line_prefix = '%t [%p]: user=%u,db=%d '
# Log line prefix should be: log_line_prefix = '%t [%p]: db=%d,user=%u '
# If you need reports per client IP address you can add client=%h or remote=%h
# pgbadger will also recognize the following forms:
# log_line_prefix = '%t [%p]: db=%d,user=%u,client=%h '
# or
# log_line_prefix = '%t [%p]: user=%u,db=%d,remote=%h '
# With syslog output
# Log line prefix should be: log_line_prefix = 'db=%d,user=%u '
#
# Additional information that could be collected and reported
# log_checkpoints = on
# log_connections = on
# log_disconnections = on
# log_lock_waits = on
# log_temp_files = 0
# log_autovacuum_min_duration = 0
#------------------------------------------------------------------------------
use vars qw($VERSION);
use strict qw(vars subs);
use Getopt::Long qw(:config no_ignore_case bundling);
use IO::File;
use Benchmark;
use File::Basename;
use Storable qw(store_fd fd_retrieve);
use Time::Local qw(timegm_nocheck timelocal_nocheck timegm timelocal);
use POSIX qw(locale_h sys_wait_h _exit strftime);
setlocale(LC_NUMERIC, '');
setlocale(LC_ALL, 'C');
use File::Spec;
use File::Temp qw/ tempfile /;
use IO::Handle;
use IO::Pipe;
use FileHandle;
use Socket;
use constant EBCDIC => "\t" ne "\011";
use Encode qw(encode decode);
$VERSION = '13.1';
$SIG{'CHLD'} = 'DEFAULT';
my $TMP_DIR = File::Spec->tmpdir() || '/tmp';
my %RUNNING_PIDS = ();
my @tempfiles = ();
my $parent_pid = $$;
my $interrupt = 0;
my $tmp_last_parsed = '';
my $tmp_dblist = '';
my @SQL_ACTION = ('SELECT', 'INSERT', 'UPDATE', 'DELETE', 'COPY FROM', 'COPY TO', 'CTE', 'DDL', 'TCL', 'CURSOR');
my @LATENCY_PERCENTILE = sort {$a <=> $b} (99,95,90);
my $graphid = 1;
my $NODATA = '
NO DATASET
';
my $MAX_QUERY_LENGTH = 25000;
my $terminate = 0;
my %CACHE_DNS = ();
my $DNSLookupTimeout = 1; # (in seconds)
my $EXPLAIN_URL = 'https://explain.depesz.com/';
my $EXPLAIN_POST = qq{};
my $PID_DIR = $TMP_DIR;
my $PID_FILE = undef;
my %DBLIST = ();
my $DBALL = 'postgres';
my $LOG_EOL_TYPE = 'LF';
# Factor used to estimate the total size of compressed file
# when real size can not be obtained (bz2 or remote files)
my $BZ_FACTOR = 30;
my $GZ_FACTOR = 15;
my $XZ_FACTOR = 18;
my @E2A = (
0, 1, 2, 3,156, 9,134,127,151,141,142, 11, 12, 13, 14, 15,
16, 17, 18, 19,157, 10, 8,135, 24, 25,146,143, 28, 29, 30, 31,
128,129,130,131,132,133, 23, 27,136,137,138,139,140, 5, 6, 7,
144,145, 22,147,148,149,150, 4,152,153,154,155, 20, 21,158, 26,
32,160,226,228,224,225,227,229,231,241,162, 46, 60, 40, 43,124,
38,233,234,235,232,237,238,239,236,223, 33, 36, 42, 41, 59, 94,
45, 47,194,196,192,193,195,197,199,209,166, 44, 37, 95, 62, 63,
248,201,202,203,200,205,206,207,204, 96, 58, 35, 64, 39, 61, 34,
216, 97, 98, 99,100,101,102,103,104,105,171,187,240,253,254,177,
176,106,107,108,109,110,111,112,113,114,170,186,230,184,198,164,
181,126,115,116,117,118,119,120,121,122,161,191,208, 91,222,174,
172,163,165,183,169,167,182,188,189,190,221,168,175, 93,180,215,
123, 65, 66, 67, 68, 69, 70, 71, 72, 73,173,244,246,242,243,245,
125, 74, 75, 76, 77, 78, 79, 80, 81, 82,185,251,252,249,250,255,
92,247, 83, 84, 85, 86, 87, 88, 89, 90,178,212,214,210,211,213,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57,179,219,220,217,218,159
);
if (EBCDIC && ord('^') == 106) { # as in the BS2000 posix-bc coded character set
$E2A[74] = 96; $E2A[95] = 159; $E2A[106] = 94; $E2A[121] = 168;
$E2A[161] = 175; $E2A[173] = 221; $E2A[176] = 162; $E2A[186] = 172;
$E2A[187] = 91; $E2A[188] = 92; $E2A[192] = 249; $E2A[208] = 166;
$E2A[221] = 219; $E2A[224] = 217; $E2A[251] = 123; $E2A[253] = 125;
$E2A[255] = 126;
}
elsif (EBCDIC && ord('^') == 176) { # as in codepage 037 on os400
$E2A[21] = 133; $E2A[37] = 10; $E2A[95] = 172; $E2A[173] = 221;
$E2A[176] = 94; $E2A[186] = 91; $E2A[187] = 93; $E2A[189] = 168;
}
my $pgbadger_logo =
'';
my $pgbadger_ico =
'data:image/x-icon;base64,
AAABAAEAIyMQAAEABAA8BAAAFgAAACgAAAAjAAAARgAAAAEABAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAgAAGRsZACgqKQA2OTcASEpJAFpdWwBoa2kAeHt5AImMigCeoZ8AsLOxAMTHxQDR1NIA
5enmAPv+/AAAAAAA///////////////////////wAAD///////////H///////////AAAP//////
//9Fq7Yv////////8AAA////////8V7u7qD////////wAAD///////8B7qWN5AL///////AAAP//
///y8Avrc3rtMCH/////8AAA/////xABvbAAAJ6kAA/////wAAD////wAG5tQAAADp6RAP////AA
AP//MQBd7C2lRESOWe5xAD//8AAA//8APO7iC+7e7u4A3uxwBf/wAAD/9Aju7iAAvu7u0QAN7ukA
7/AAAP/wCe7kAAAF7ugAAAHO6xD/8AAA//AK7CAAAAHO1AAAABnrEP/wAAD/8ArAAAAAAc7kAAAA
AIwQ//AAAP/wCjAAAAAC3uQAAAAAHBCf8AAA//AIEBVnIATu5gAXZhAFEP/wAAD/8AIAqxdwBu7p
AFoX0QIQ//AAAP/wAAPsBCAL7u4QBwfmAAD/8AAA//AAA8owAC7u7lAAKbYAAJ/wAAD/8AAAAAAA
fu7uwAAAAAAA//AAAP/wAAAAAADu7u7jAAAAAAD/8AAA//AAAAAABe7u7uoAAAAAAP/wAAD/8AAA
AAAL7u7u7QAAAAAAn/AAAP/wAAAAAB3u7u7uYAAAAAD/8AAA//MAAAAATu7u7u6QAAAAAP/wAAD/
/wAAAAAM7u7u7TAAAAAD//AAAP//IQAAAAKu7u7UAAAAAB//8AAA////IAAAAAju7BAAAAAP///w
AAD////2AAA1je7ulUAAA/////AAAP/////xEAnO7u7pIAH/////8AAA//////9CABju6iACP///
///wAAD////////wAAggAP////////AAAP////////8wAAA/////////8AAA///////////w////
///////wAAD///////////////////////AAAP/////gAAAA//+//+AAAAD//Af/4AAAAP/4A//g
AAAA//AA/+AAAAD/oAA/4AAAAP8AAB/gAAAA/gAAD+AAAADwAAAB4AAAAPAAAADgAAAA4AAAAGAA
AADgAAAA4AAAAOAAAADgAAAA4AAAAOAAAADgAAAAYAAAAOAAAADgAAAA4AAAAOAAAADgAAAA4AAA
AOAAAABgAAAA4AAAAOAAAADgAAAA4AAAAOAAAADgAAAA4AAAAGAAAADgAAAA4AAAAOAAAADgAAAA
8AAAAOAAAADwAAAB4AAAAPwAAAfgAAAA/gAAD+AAAAD/gAA/4AAAAP/AAH/gAAAA//gD/+AAAAD/
/Af/4AAAAP//v//gAAAA/////+AAAAA
';
my %CLASS_ERROR_CODE = (
'00' => 'Successful Completion',
'01' => 'Warning',
'02' => 'No Data (this is also a warning class per the SQL standard)',
'03' => 'SQL Statement Not Yet Complete',
'08' => 'Connection Exception',
'09' => 'Triggered Action Exception',
'0A' => 'Feature Not Supported',
'0B' => 'Invalid Transaction Initiation',
'0F' => 'Locator Exception',
'0L' => 'Invalid Grantor',
'0P' => 'Invalid Role Specification',
'0Z' => 'Diagnostics Exception',
'20' => 'Case Not Found',
'21' => 'Cardinality Violation',
'22' => 'Data Exception',
'23' => 'Integrity Constraint Violation',
'24' => 'Invalid Cursor State',
'25' => 'Invalid Transaction State',
'26' => 'Invalid SQL Statement Name',
'27' => 'Triggered Data Change Violation',
'28' => 'Invalid Authorization Specification',
'2B' => 'Dependent Privilege Descriptors Still Exist',
'2D' => 'Invalid Transaction Termination',
'2F' => 'SQL Routine Exception',
'34' => 'Invalid Cursor Name',
'38' => 'External Routine Exception',
'39' => 'External Routine Invocation Exception',
'3B' => 'Savepoint Exception',
'3D' => 'Invalid Catalog Name',
'3F' => 'Invalid Schema Name',
'40' => 'Transaction Rollback',
'42' => 'Syntax Error or Access Rule Violation',
'44' => 'WITH CHECK OPTION Violation',
'53' => 'Insufficient Resources',
'54' => 'Program Limit Exceeded',
'55' => 'Object Not In Prerequisite State',
'57' => 'Operator Intervention',
'58' => 'System Error (errors external to PostgreSQL itself)',
'72' => 'Snapshot Failure',
'F0' => 'Configuration File Error',
'HV' => 'Foreign Data Wrapper Error (SQL/MED)',
'P0' => 'PL/pgSQL Error',
'XX' => 'Internal Error',
);
####
# method used to fork as many children as wanted
##
sub spawn
{
my $coderef = shift;
unless (@_ == 0 && $coderef && ref($coderef) eq 'CODE') {
print "usage: spawn CODEREF";
exit 0;
}
my $pid;
if (!defined($pid = fork)) {
print STDERR "ERROR: cannot fork: $!\n";
return;
} elsif ($pid) {
$RUNNING_PIDS{$pid} = $pid;
return; # the parent
}
# the child -- go spawn
$< = $>;
$( = $); # suid progs only
exit &$coderef();
}
# Command line options
my $journalctl_cmd = '';
my $zcat_cmd = 'gunzip -c';
my $zcat = $zcat_cmd;
my $bzcat = 'bunzip2 -c';
my $lz4cat = 'lz4cat';
my $ucat = 'unzip -p';
my $xzcat = 'xzcat';
my $zstdcat = 'zstdcat';
my $gzip_uncompress_size = "gunzip -l \"%f\" | grep -E '^\\s*[0-9]+' | awk '{print \$2}'";
# lz4 archive can only contain one file.
# Original size can be retrieved only if --content-size has been used for compression
# it seems lz4 sends its output to stderr so redirect it to stdout
my $lz4_uncompress_size = " lz4 -v -c --list %f 2>&1 |tail -n 2|head -n1 | awk '{print \$6}'";
my $zip_uncompress_size = "unzip -l %f | awk '{if (NR==4) print \$1}'";
my $xz_uncompress_size = "xz --robot -l %f | grep totals | awk '{print \$5}'";
my $zstd_uncompress_size = "zstd -v -l %f |grep Decompressed | awk -F\"[ (]*\" '{print \$5}'";
my $format = '';
my @outfiles = ();
my $outdir = '';
my $incremental = '';
my $extra_files = 0;
my $help = '';
my $ver = '';
my @dbname = ();
my @dbuser = ();
my @dbclient = ();
my @dbappname = ();
my @exclude_user = ();
my @exclude_appname = ();
my @exclude_db = ();
my @exclude_client = ();
my @exclude_line = ();
my $ident = '';
my $top = 0;
my $sample = 3;
my $extension = '';
my $maxlength = 100000;
my $graph = 1;
my $nograph = 0;
my $debug = 0;
my $noprettify = 0;
my $from = '';
my $to = '';
my $from_hour = '';
my $to_hour = '';
my $quiet = 0;
my $progress = 1;
my $error_only = 0;
my @exclude_query = ();
my @exclude_queryid = ();
my @exclude_time = ();
my @include_time = ();
my $exclude_file = '';
my @include_query = ();
my $include_file = '';
my $disable_error = 0;
my $disable_hourly = 0;
my $disable_type = 0;
my $disable_query = 0;
my $disable_session = 0;
my $disable_connection = 0;
my $disable_lock = 0;
my $disable_temporary = 0;
my $disable_checkpoint = 0;
my $disable_autovacuum = 0;
my $avg_minutes = 5;
my $histo_avg_minutes = 60;
my $last_parsed = '';
my $report_title = '';
my $log_line_prefix = '';
my $compiled_prefix = '';
my $project_url = 'http://pgbadger.darold.net/';
my $t_min = 0;
my $t_max = 0;
my $remove_comment = 0;
my $select_only = 0;
my $queue_size = 0;
my $job_per_file = 0;
my $charset = 'utf-8';
my $csv_sep_char = ',';
my %current_sessions = ();
my %pgb_current_sessions = ();
my $incr_date = '';
my $last_incr_date = '';
my $anonymize = 0;
my $noclean = 0;
my $retention = 0;
my $dns_resolv = 0;
my $nomultiline = 0;
my $noreport = 0;
my $log_duration = 0;
my $logfile_list = '';
my $enable_checksum = 0;
my $timezone = 0;
my $opt_timezone = 0;
my $pgbouncer_only = 0;
my $rebuild = 0;
my $week_start_monday = 0;
my $iso_week_number = 0;
my $use_sessionid_as_pid = 0;
my $dump_normalized_only = 0;
my $log_timezone = 0;
my $opt_log_timezone = 0;
my $json_prettify = 0;
my $report_per_database = 0;
my $html_outdir = '';
my $param_size_limit = 24;
my $month_report = 0;
my $day_report = 0;
my $noexplain = 0;
my $log_command = '';
my $wide_char = 0;
my $noweekreport = 0;
my $query_numbering = 0;
my $keep_comments = 0;
my $no_progessbar = 0;
my $NUMPROGRESS = 10;
my @DIMENSIONS = (800, 300);
my $RESRC_URL = '';
my $img_format = 'png';
my @log_files = ();
my %prefix_vars = ();
my $q_prefix = '';
my @prefix_q_params = ();
my %last_execute_stmt = ();
my $disable_process_title = 0;
my $dump_all_queries = 0;
my $dump_raw_csv = 0;
my $header_done = 0;
my @include_pid = ();
my @include_session = ();
my $compress_extensions = qr/\.(zip|gz|xz|bz2|lz4|zst)$/i;
my $remote_host = '';
my $ssh_command = '';
my $ssh_bin = 'ssh';
my $ssh_port = 22;
my $ssh_identity = '';
my $ssh_user = '';
my $ssh_timeout = 10;
my $ssh_options = "-o ConnectTimeout=$ssh_timeout -o PreferredAuthentications=hostbased,publickey";
my $force_sample = 0;
my $nofork = 0;
my $curl_command = 'curl -k -s ';
my $sql_prettified = pgFormatter::Beautify->new('colorize' => 1, 'format' => 'html', 'uc_keywords' => 0);
# Flag for logs using UTC, in this case we don't autodetect the timezone
my $isUTC = 0;
# Do not display data in pie where percentage is lower than this value
# to avoid label overlapping.
my $pie_percentage_limit = 2;
# Get the decimal separator
my $n = 5 / 2;
my $num_sep = ',';
$num_sep = ' ' if ($n =~ /,/);
# Set iso datetime pattern
my $time_pattern = qr/(\d{4})-(\d{2})-(\d{2})[\sT](\d{2}):(\d{2}):(\d{2})/;
# Inform the parent that it should stop iterating over the other files to parse
sub stop_parsing
{
&logmsg('DEBUG', "Received interrupt signal");
$interrupt = 1;
}
# With multiprocess we need to wait for all children
sub wait_child
{
my $sig = shift;
$interrupt = 2;
print STDERR "Received terminating signal ($sig).\n";
1 while wait != -1;
$SIG{INT} = \&wait_child;
$SIG{TERM} = \&wait_child;
foreach my $f (@tempfiles)
{
unlink("$f->[1]") if (-e "$f->[1]");
}
if ($report_per_database)
{
unlink("$tmp_dblist");
}
if ($last_parsed && -e "$tmp_last_parsed")
{
unlink("$tmp_last_parsed");
}
if ($last_parsed && -e "$last_parsed.tmp")
{
unlink("$last_parsed.tmp");
}
if (-e "$PID_FILE")
{
unlink("$PID_FILE");
}
_exit(2);
}
$SIG{INT} = \&wait_child;
$SIG{TERM} = \&wait_child;
if ($^O !~ /MSWin32|dos/i) {
$SIG{USR2} = \&stop_parsing;
} else {
$nofork = 1;
}
$| = 1;
my $histogram_query = '';
my $histogram_session = '';
# get the command line parameters
my $result = GetOptions(
"a|average=i" => \$avg_minutes,
"A|histo-average=i" => \$histo_avg_minutes,
"b|begin=s" => \$from,
"c|dbclient=s" => \@dbclient,
"C|nocomment!" => \$remove_comment,
"d|dbname=s" => \@dbname,
"D|dns-resolv!" => \$dns_resolv,
"e|end=s" => \$to,
"E|explode!" => \$report_per_database,
"f|format=s" => \$format,
"G|nograph!" => \$nograph,
"h|help!" => \$help,
"H|html-outdir=s" => \$html_outdir,
"i|ident=s" => \$ident,
"I|incremental!" => \$incremental,
"j|jobs=i" => \$queue_size,
"J|Jobs=i" => \$job_per_file,
"l|last-parsed=s" => \$last_parsed,
"L|logfile-list=s" => \$logfile_list,
"m|maxlength=i" => \$maxlength,
"M|no-multiline!" => \$nomultiline,
"N|appname=s" => \@dbappname,
"o|outfile=s" => \@outfiles,
"O|outdir=s" => \$outdir,
"p|prefix=s" => \$log_line_prefix,
"P|no-prettify!" => \$noprettify,
"q|quiet!" => \$quiet,
"Q|query-numbering!" => \$query_numbering,
"r|remote-host=s" => \$remote_host,
'R|retention=i' => \$retention,
"s|sample=i" => \$sample,
"S|select-only!" => \$select_only,
"t|top=i" => \$top,
"T|title=s" => \$report_title,
"u|dbuser=s" => \@dbuser,
"U|exclude-user=s" => \@exclude_user,
"v|verbose!" => \$debug,
"V|version!" => \$ver,
"w|watch-mode!" => \$error_only,
"W|wide-char!" => \$wide_char,
"x|extension=s" => \$extension,
"X|extra-files!" => \$extra_files,
"z|zcat=s" => \$zcat,
"Z|timezone=f" => \$opt_timezone,
"pie-limit=i" => \$pie_percentage_limit,
"image-format=s" => \$img_format,
"exclude-query=s" => \@exclude_query,
"exclude-queryid=s" => \@exclude_queryid,
"exclude-file=s" => \$exclude_file,
"exclude-db=s" => \@exclude_db,
"exclude-client=s" => \@exclude_client,
"exclude-appname=s" => \@exclude_appname,
"include-query=s" => \@include_query,
"exclude-line=s" => \@exclude_line,
"include-file=s" => \$include_file,
"disable-error!" => \$disable_error,
"disable-hourly!" => \$disable_hourly,
"disable-type!" => \$disable_type,
"disable-query!" => \$disable_query,
"disable-session!" => \$disable_session,
"disable-connection!" => \$disable_connection,
"disable-lock!" => \$disable_lock,
"disable-temporary!" => \$disable_temporary,
"disable-checkpoint!" => \$disable_checkpoint,
"disable-autovacuum!" => \$disable_autovacuum,
"charset=s" => \$charset,
"csv-separator=s" => \$csv_sep_char,
"include-time=s" => \@include_time,
"exclude-time=s" => \@exclude_time,
'ssh-command=s' => \$ssh_command,
'ssh-program=s' => \$ssh_bin,
'ssh-port=i' => \$ssh_port,
'ssh-identity=s' => \$ssh_identity,
'ssh-option=s' => \$ssh_options,
'ssh-user=s' => \$ssh_user,
'ssh-timeout=i' => \$ssh_timeout,
'anonymize!' => \$anonymize,
'noclean!' => \$noclean,
'noreport!' => \$noreport,
'log-duration!' => \$log_duration,
'enable-checksum!' => \$enable_checksum,
'journalctl=s' => \$journalctl_cmd,
'pid-dir=s' => \$PID_DIR,
'pid-file=s' => \$PID_FILE,
'rebuild!' => \$rebuild,
'pgbouncer-only!' => \$pgbouncer_only,
'start-monday!' => \$week_start_monday,
'iso-week-number!' => \$iso_week_number,
'normalized-only!' => \$dump_normalized_only,
'log-timezone=f' => \$opt_log_timezone,
'prettify-json!' => \$json_prettify,
'month-report=s' => \$month_report,
'day-report=s' => \$day_report,
'noexplain!' => \$noexplain,
'command=s' => \$log_command,
'no-week!' => \$noweekreport,
'explain-url=s' => \$EXPLAIN_URL,
'tempdir=s' => \$TMP_DIR,
'no-process-info!' => \$disable_process_title,
'dump-all-queries!' => \$dump_all_queries,
'keep-comments!' => \$keep_comments,
'no-progressbar!' => \$no_progessbar,
'dump-raw-csv!' => \$dump_raw_csv,
'include-pid=i' => \@include_pid,
'include-session=s' => \@include_session,
'histogram-query=s' => \$histogram_query,
'histogram-session=s' => \$histogram_session,
'no-fork' => \$nofork,
);
die "FATAL: use pgbadger --help\n" if (not $result);
# Force rebuild mode when a month report is asked
$rebuild = 1 if ($month_report);
$rebuild = 2 if ($day_report);
# Set report title
$report_title = &escape_html($report_title) if $report_title;
# Show version and exit if asked
if ($ver) {
print "pgBadger version $VERSION\n";
exit 0;
}
&usage() if ($help);
# Create temporary file directory if not exists
mkdir("$TMP_DIR") if (!-d "$TMP_DIR");
if (!-d "$TMP_DIR")
{
die("Can not use temporary directory $TMP_DIR.\n");
}
# Try to load Digest::MD5 when asked
if ($enable_checksum)
{
if (eval {require Digest::MD5;1} ne 1) {
die("Can not load Perl module Digest::MD5.\n");
} else {
Digest::MD5->import('md5_hex');
}
}
# Check if another process is already running
unless ($PID_FILE) {
$PID_FILE = $PID_DIR . '/pgbadger.pid';
}
if (-e "$PID_FILE")
{
my $is_running = 2;
if ($^O !~ /MSWin32|dos/i) {
eval { $is_running = `ps auwx | grep pgbadger | grep -v grep | wc -l`; chomp($is_running); };
}
if (!$@ && ($is_running <= 1)) {
unlink("$PID_FILE");
} else {
print "FATAL: another process is already started or remove the file, see $PID_FILE\n";
exit 3;
}
}
# Create pid file
if (open(my $out, '>', $PID_FILE))
{
print $out $$;
close($out);
}
else
{
print "FATAL: can't create pid file $PID_FILE, $!\n";
exit 3;
}
# Rewrite some command line arguments as lists
&compute_arg_list();
# If pgBadger must parse remote files set the ssh command
# if no user-defined ssh command has been set
my $remote_command = '';
if ($remote_host && !$ssh_command) {
$remote_command = &set_ssh_command($ssh_command, $remote_host);
}
# Add journalctl command to the file list if not already found
if ($journalctl_cmd)
{
if (!grep(/^\Q$journalctl_cmd\E$/, @ARGV))
{
$journalctl_cmd .= " --output='short-iso'";
push(@ARGV, $journalctl_cmd);
}
}
# Add custom command to file list
if ($log_command)
{
if (!grep(/^\Q$log_command\E$/, @ARGV)) {
push(@ARGV, $log_command);
}
}
# Log files to be parsed are passed as command line arguments
my $empty_files = 1;
if ($#ARGV >= 0)
{
if (!$month_report) {
foreach my $file (@ARGV) {
push(@log_files, &set_file_list($file));
}
} elsif (!$outdir) {
$outdir = $ARGV[0];
}
}
if (!$incremental && $html_outdir) {
localdie("FATAL: parameter -H, --html-outdir can only be used with incremental mode.\n");
}
# Read list of log file to parse from a file
if ($logfile_list)
{
if (!-e $logfile_list) {
localdie("FATAL: logfile list $logfile_list must exist!\n");
}
my $in = undef;
if (not open($in, "<", $logfile_list)) {
localdie("FATAL: can not read logfile list $logfile_list, $!.\n");
}
my @files = <$in>;
close($in);
foreach my $file (@files)
{
chomp($file);
$file =~ s/\r//;
if ($file eq '-') {
localdie("FATAL: stdin input - can not be used with logfile list.\n");
}
push(@log_files, &set_file_list($file));
}
}
# Do not warn if all log files are empty
if (!$rebuild && $empty_files)
{
&logmsg('DEBUG', "All log files are empty, exiting...");
unlink("$PID_FILE");
exit 0;
}
# Logfile is a mandatory parameter when journalctl command is not set.
if ( !$rebuild && ($#log_files < 0) && !$journalctl_cmd && !$log_command)
{
if (!$quiet) {
localdie("FATAL: you must give a log file at command line parameter.\n\n", 4);
}
else
{
unlink("$PID_FILE");
exit 4;
}
}
if ($#outfiles >= 1 && ($dump_normalized_only || $dump_all_queries)) {
localdie("FATAL: dump of normalized queries can not ne used with multiple output.\n\n");
}
# Remove follow option from journalctl command to prevent infinite loop
if ($journalctl_cmd) {
$journalctl_cmd =~ s/(-f|--follow)\b//;
}
# Disable the progress bar in quiet mode or when --no-progressbar is used
$progress = 0 if ($quiet || $no_progessbar);
# Set the default number of minutes for queries and connections average
$avg_minutes ||= 5;
$avg_minutes = 60 if ($avg_minutes > 60);
$avg_minutes = 1 if ($avg_minutes < 1);
$histo_avg_minutes ||= 60;
$histo_avg_minutes = 60 if ($histo_avg_minutes > 60);
$histo_avg_minutes = 1 if ($histo_avg_minutes < 1);
my @avgs = ();
for (my $i = 0 ; $i < 60 ; $i += $avg_minutes) {
push(@avgs, sprintf("%02d", $i));
}
my @histo_avgs = ();
for (my $i = 0 ; $i < 60 ; $i += $histo_avg_minutes) {
push(@histo_avgs, sprintf("%02d", $i));
}
# Set the URL to submit explain plan
$EXPLAIN_POST = sprintf($EXPLAIN_POST, $EXPLAIN_URL);
# Set error like log level regex
my $parse_regex = qr/^(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|HINT|STATEMENT|CONTEXT|LOCATION)/;
my $full_error_regex = qr/^(WARNING|ERROR|FATAL|PANIC|DETAIL|HINT|STATEMENT|CONTEXT)/;
my $main_error_regex = qr/^(WARNING|ERROR|FATAL|PANIC)/;
my $main_log_regex = qr/^(LOG|WARNING|ERROR|FATAL|PANIC)/;
# Set syslog prefix regex
my $other_syslog_line = '';
my $pgbouncer_log_format = '';
my $pgbouncer_log_parse1 = '';
my $pgbouncer_log_parse2 = '';
my $pgbouncer_log_parse3 = '';
# Variable to store parsed data following the line prefix
my @prefix_params = ();
my @pgb_prefix_params = ();
my @pgb_prefix_parse1 = ();
my @pgb_prefix_parse2 = ();
my @pgb_prefix_parse3 = ();
# Force incremental mode when rebuild mode is used
if ($rebuild && !$incremental) {
print STDERR "WARNING: --rebuild require incremental mode, activating it.\n" if (!$month_report || !$day_report);
$incremental = 1;
}
&logmsg('DEBUG', "pgBadger version $VERSION." );
# set timezone to use
&set_timezone(1);
# Set default top query
$top ||= 20;
# Set output file
my $outfile = '';
$outfile = $outfiles[0] if ($#outfiles >= 0);
if (($dump_normalized_only || $dump_all_queries) && $outfile && $outfile !~ /\.txt$/){
localdie("FATAL: dump of normalized queries can be done in text output format, please use .txt extension.\n\n");
}
# With multiple output format we must use a temporary binary file
my $dft_extens = '';
if ($#outfiles >= 1)
{
# We can not have multiple output in incremental mode
if ($incremental)
{
localdie("FATAL: you can not use multiple output format with incremental mode.\n\n");
}
# Set temporary binary file.
$outfile = $TMP_DIR . "/pgbadger_tmp_$$.bin";
# Remove the default output format for the moment
# otherwise all dump will have the same output
$dft_extens = $extension;
$extension = '';
}
elsif ($#outfiles == -1)
{
$extension = 'txt' if ($dump_normalized_only || $dump_all_queries);
($extension) ? push(@outfiles, 'out.' . $extension) : push(@outfiles, 'out.html');
map { s/\.text/.txt/; } @outfiles;
}
# Set the default extension and output format, load JSON Perl module if required
# Force text output with normalized query list only and disable incremental report
# Set default filename of the output file
my ($current_out_file, $extens) = &set_output_extension($outdir, $outfile, $extension);
# Set default syslog ident name
$ident ||= 'postgres';
# Set default pie percentage limit or fix value
$pie_percentage_limit = 0 if ($pie_percentage_limit < 0);
$pie_percentage_limit = 2 if ($pie_percentage_limit eq '');
$pie_percentage_limit = 100 if ($pie_percentage_limit > 100);
# Set default download image format
$img_format = lc($img_format);
$img_format = 'jpeg' if ($img_format eq 'jpg');
$img_format = 'png' if ($img_format ne 'jpeg');
# Extract the output directory from outfile so that graphs will
# be created in the same directory
if ($current_out_file ne '-')
{
if (!$html_outdir && !$outdir)
{
my @infs = fileparse($current_out_file);
if ($infs[0] ne '')
{
$outdir = $infs[1];
}
else
{
# maybe a confusion between -O and -o
localdie("FATAL: output file $current_out_file is a directory, should be a file\nor maybe you want to use -O | --outdir option instead.\n");
}
}
elsif ($outdir && !-d "$outdir")
{
# An output directory has been passed as command line parameter
localdie("FATAL: $outdir is not a directory or doesn't exist.\n");
}
elsif ($html_outdir && !-d "$html_outdir")
{
# An HTML output directory has been passed as command line parameter
localdie("FATAL: $html_outdir is not a directory or doesn't exist.\n");
}
$current_out_file = basename($current_out_file);
$current_out_file = ($html_outdir || $outdir) . '/' . $current_out_file;
$current_out_file =~ s#//#/#g;
}
# Remove graph support if output is not html
$graph = 0 unless ($extens eq 'html' or $extens eq 'binary' or $extens eq 'json');
$graph = 0 if ($nograph);
# Set some default values
my $end_top = $top - 1;
$queue_size ||= 1;
$job_per_file ||= 1;
if ($^O =~ /MSWin32|dos/i)
{
if ( ($queue_size > 1) || ($job_per_file > 1) ) {
print STDERR "WARNING: parallel processing is not supported on this platform.\n";
}
$queue_size = 1;
$job_per_file = 1;
}
# Test file creation before going to parse log
my $tmpfh = new IO::File ">$current_out_file";
if (not defined $tmpfh) {
localdie("FATAL: can't write to $current_out_file, $!\n");
}
$tmpfh->close();
unlink($current_out_file) if (-e $current_out_file);
# -w and --disable-error can't go together
if ($error_only && $disable_error) {
localdie("FATAL: please choose between no event report and reporting events only.\n");
}
# Set default search pattern for database, user name, application name and host in log_line_prefix
my $regex_prefix_dbname = qr/(?:db|database)=([^,]*)/;
my $regex_prefix_dbuser = qr/(?:user|usr)=([^,]*)/;
my $regex_prefix_dbclient = qr/(?:client|remote|ip|host|connection_source)=([^,\(]*)/;
my $regex_prefix_dbappname = qr/(?:app|application|application_name)=([^,]*)/;
my $regex_prefix_queryid = qr/(?:queryid)=([^,]*)/;
my $regex_prefix_sqlstate = qr/(?:error_code|state|state_code)=([^,]*)/;
my $regex_prefix_backendtype = qr/(?:backend_type|btype)=([^,]*)/;
# Set pattern to look for query type
my $action_regex = qr/^[\s\(]*(DELETE|INSERT|UPDATE|SELECT|COPY|WITH|CREATE|DROP|ALTER|TRUNCATE|BEGIN|COMMIT|ROLLBACK|START|END|SAVEPOINT|DECLARE|CLOSE|FETCH|MOVE)/is;
# Loading excluded query from file if any
if ($exclude_file) {
open(my $in, '<', $exclude_file) or localdie("FATAL: can't read file $exclude_file: $!\n");
my @exclq = <$in>;
close($in);
chomp(@exclq);
foreach my $r (@exclq) {
$r =~ s/\r//;
&check_regex($r, '--exclude-file');
}
push(@exclude_query, @exclq);
}
# Testing regex syntax
if ($#exclude_query >= 0) {
foreach my $r (@exclude_query) {
&check_regex($r, '--exclude-query');
}
}
# Testing regex syntax
if ($#exclude_time >= 0) {
foreach my $r (@exclude_time) {
&check_regex($r, '--exclude-time');
}
}
#
# Testing regex syntax
if ($#include_time >= 0) {
foreach my $r (@include_time) {
&check_regex($r, '--include-time');
}
}
# Loading included query from file if any
if ($include_file) {
open(my $in, '<', $include_file) or localdie("FATAL: can't read file $include_file: $!\n");
my @exclq = <$in>;
close($in);
chomp(@exclq);
foreach my $r (@exclq) {
$r =~ s/\r//;
&check_regex($r, '--include-file');
}
push(@include_query, @exclq);
}
# Testing regex syntax
if ($#include_query >= 0) {
foreach my $r (@include_query) {
&check_regex($r, '--include-query');
}
}
# Check start/end date time
if ($from)
{
if ( $from =~ /^(\d{2}):(\d{2}):(\d{2})([.]\d+([+-]\d+)?)?$/)
{
# only time, trick around the date part
my $fractional_seconds = $4 || "0";
$from_hour = "$1:$2:$3.$fractional_seconds";
&logmsg('DEBUG', "Setting begin time to [$from_hour]" );
}
elsif( $from =~ /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})([.]\d+([+-]\d+)?)?$/ )
{
my $fractional_seconds = $7 || "0";
$from = "$1-$2-$3 $4:$5:$6.$fractional_seconds";
&logmsg('DEBUG', "Setting begin datetime to [$from]" );
}
else
{
localdie("FATAL: bad format for begin datetime/time, should be yyyy-mm-dd hh:mm:ss.l+tz or hh:mm:ss.l+tz\n");
}
}
if ($to)
{
if ( $to =~ /^(\d{2}):(\d{2}):(\d{2})([.]\d+([+-]\d+)?)?$/)
{
# only time, trick around the date part
my $fractional_seconds = $4 || "0";
$to_hour = "$1:$2:$3.$fractional_seconds";
&logmsg('DEBUG', "Setting end time to [$to_hour]" );
}
elsif( $to =~ /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})([.]\d+([+-]\d+)?)?$/)
{
my $fractional_seconds = $7 || "0";
$to = "$1-$2-$3 $4:$5:$6.$fractional_seconds";
&logmsg('DEBUG', "Setting end time to [$to]" );
}
else
{
localdie("FATAL: bad format for ending datetime, should be yyyy-mm-dd hh:mm:ss.l+tz or hh:mm:ss.l+tz\n");
}
}
if ($from && $to && ($from gt $to)) {
localdie("FATAL: begin date is after end date!\n") ;
}
# Stores the last parsed line from log file to allow incremental parsing
my $LAST_LINE = '';
# Set the level of the data aggregator, can be minute, hour or day depending on
# the size of the log file.
my $LEVEL = 'hour';
# Month names
my %month_abbr = (
'Jan' => '01', 'Feb' => '02', 'Mar' => '03', 'Apr' => '04', 'May' => '05', 'Jun' => '06',
'Jul' => '07', 'Aug' => '08', 'Sep' => '09', 'Oct' => '10', 'Nov' => '11', 'Dec' => '12'
);
my %abbr_month = (
'01' => 'Jan', '02' => 'Feb', '03' => 'Mar', '04' => 'Apr', '05' => 'May', '06' => 'Jun',
'07' => 'Jul', '08' => 'Aug', '09' => 'Sep', '10' => 'Oct', '11' => 'Nov', '12' => 'Dec'
);
# Bounds of the query times histogram
my @histogram_query_time = (0, 1, 5, 10, 25, 50, 100, 500, 1000, 10000);
# Bounds of the session times histogram
my @histogram_session_time = (0, 500, 1000, 30000, 60000, 600000, 1800000, 3600000, 28800000);
# Check histogram values user redefinition
if ($histogram_query)
{
if ($histogram_query =~ /[^0-9\s,]+/) {
die("Incorrect value for option --histogram_query\n");
}
@histogram_query_time = split(/\s*,\s*/, $histogram_query);
}
if ($histogram_session)
{
if ($histogram_session =~ /[^0-9\s,]+/) {
die("Incorrect value for option --histogram_session\n");
}
@histogram_session_time = split(/\s*,\s*/, $histogram_session);
}
# Where statistics are stored
my %overall_stat = ();
my %pgb_overall_stat = ();
my %overall_checkpoint = ();
my %top_slowest = ();
my %normalyzed_info = ();
my %error_info = ();
my %pgb_error_info = ();
my %pgb_pool_info = ();
my %logs_type = ();
my %errors_code = ();
my %per_minute_info = ();
my %pgb_per_minute_info = ();
my %lock_info = ();
my %tempfile_info = ();
my %cancelled_info = ();
my %connection_info = ();
my %pgb_connection_info = ();
my %database_info = ();
my %application_info = ();
my %user_info = ();
my %host_info = ();
my %session_info = ();
my %pgb_session_info = ();
my %conn_received = ();
my %checkpoint_info = ();
my %autovacuum_info = ();
my %autoanalyze_info = ();
my @graph_values = ();
my %cur_info = ();
my %cur_temp_info = ();
my %cur_plan_info = ();
my %cur_cancel_info = ();
my %cur_lock_info = ();
my $nlines = 0;
my %last_line = ();
my %pgb_last_line = ();
our %saved_last_line = ();
our %pgb_saved_last_line= ();
my %top_locked_info = ();
my %top_tempfile_info = ();
my %top_cancelled_info = ();
my %drawn_graphs = ();
my %cur_bind_info = ();
my %prepare_info = ();
my %bind_info = ();
# Global output filehandle
my $fh = undef;
my $t0 = Benchmark->new;
# Write resource files from the __DATA__ section if they have not already been
# copied and return the HTML links to those files. If --extra-files is not used,
# return the CSS and JS code to be embedded in the HTML files
my @jscode = &write_resources();
# Automatically set parameters with incremental mode
if ($incremental)
{
# In incremental mode an output directory must be set
if (!$html_outdir && !$outdir)
{
localdie("FATAL: you must specify an output directory with incremental mode, see -O or --outdir.\n")
}
# Ensure this is not a relative path
if ($outdir && dirname($outdir) eq '.')
{
localdie("FATAL3: output directory ($outdir) is not an absolute path.\n");
}
if ($html_outdir && dirname($html_outdir) eq '.')
{
localdie("FATAL: output HTML directory ($html_outdir) is not an absolute path.\n");
}
# Ensure that the directory already exists
if ($outdir && !-d $outdir)
{
localdie("FATAL: output directory $outdir does not exists.\n");
}
# Verify that the HTML outdir exists when specified
if ($html_outdir && !-d $html_outdir)
{
localdie("FATAL: output HTML directory $html_outdir does not exists.\n");
}
# Set default last parsed file in incremental mode
if (!$last_parsed && $incremental)
{
$last_parsed = $outdir . '/LAST_PARSED';
}
$current_out_file = 'index.html';
# Set default output format
$extens = 'binary';
if ($rebuild && !$month_report && !$day_report)
{
# Look for directory where report must be generated again
my @build_directories = ();
# Find directories that should be rebuilt
unless(opendir(DIR, "$outdir"))
{
localdie("FATAL: can't opendir $outdir: $!\n");
}
my @dyears = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
foreach my $y (sort { $a <=> $b } @dyears)
{
unless(opendir(DIR, "$outdir/$y"))
{
localdie("FATAL: can't opendir $outdir/$y: $!\n");
}
my @dmonths = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
foreach my $m (sort { $a <=> $b } @dmonths)
{
unless(opendir(DIR, "$outdir/$y/$m"))
{
localdie("FATAL: can't opendir $outdir/$y/$m: $!\n");
}
my @ddays = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
foreach my $d (sort { $a <=> $b } @ddays)
{
unless(opendir(DIR, "$outdir/$y/$m/$d"))
{
localdie("FATAL: can't opendir $outdir/$y/$m/$d: $!\n");
}
my @binfiles = grep { $_ =~ /\.bin$/ } readdir(DIR);
closedir DIR;
push(@build_directories, "$y-$m-$d") if ($#binfiles >= 0);
}
}
}
&build_incremental_reports(@build_directories);
my $t2 = Benchmark->new;
my $td = timediff($t2, $t0);
&logmsg('DEBUG', "rebuilding reports took: " . timestr($td));
# Remove pidfile
unlink("$PID_FILE");
exit 0;
}
elsif ($month_report)
{
# Look for directory where cumulative report must be generated
my @build_directories = ();
# Get year+month as a path
$month_report =~ s#/#-#g;
my $month_path = $month_report;
$month_path =~ s#-#/#g;
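# For example (illustrative date), --month-report 2024-05 or 2024/05 gives
# $month_path = '2024/05', so binary files are looked up under
# $outdir/2024/05/<day>/ below.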
if ($month_path !~ m#^\d{4}/\d{2}$#)
{
localdie("FATAL: invalid format YYYY-MM for --month-report option: $month_report");
}
&logmsg('DEBUG', "building month report into $outdir/$month_path");
# Find day directories that should be used to build the monthly report
unless(opendir(DIR, "$outdir/$month_path"))
{
localdie("FATAL: can't opendir $outdir/$month_path: $!\n");
}
my @ddays = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
foreach my $d (sort { $a <=> $b } @ddays)
{
unless(opendir(DIR, "$outdir/$month_path/$d"))
{
localdie("FATAL: can't opendir $outdir/$month_path/$d: $!\n");
}
my @binfiles = grep { $_ =~ /\.bin$/ } readdir(DIR);
closedir DIR;
push(@build_directories, "$month_report-$d") if ($#binfiles >= 0);
}
&build_month_reports($month_path, @build_directories);
my $t2 = Benchmark->new;
my $td = timediff($t2, $t0);
&logmsg('DEBUG', "building month report took: " . timestr($td));
# Remove pidfile
unlink("$PID_FILE");
exit 0;
}
elsif ($day_report)
{
# Look for directory where cumulative report must be generated
my @build_directories = ();
# Get year+month as a path
$day_report =~ s#/#-#g;
my $day_path = $day_report;
$day_path =~ s#-#/#g;
if ($day_path !~ m#^\d{4}/\d{2}\/\d{2}$#)
{
localdie("FATAL: invalid format YYYY-MM-DD for --day-report option: $day_report");
}
&logmsg('DEBUG', "building day report into $outdir/$day_path");
# Find the binary files that should be used to build the daily report
unless(opendir(DIR, "$outdir/$day_path"))
{
localdie("FATAL: can't opendir $outdir/$day_path: $!\n");
}
my @binfiles = grep { $_ =~ /\.bin$/ } readdir(DIR);
closedir DIR;
push(@build_directories, "$day_report") if ($#binfiles >= 0);
&build_day_reports($day_path, @build_directories);
my $t2 = Benchmark->new;
my $td = timediff($t2, $t0);
&logmsg('DEBUG', "building day report took: " . timestr($td));
# Remove pidfile
unlink("$PID_FILE");
exit 0;
}
}
else
{
# Extra files for resources are not allowed without incremental mode
$extra_files = 0;
}
# Reading last line parsed
if ($last_parsed && -e $last_parsed)
{
if (open(my $in, '<', $last_parsed))
{
my @content = <$in>;
close($in);
foreach my $line (@content)
{
chomp($line);
next if (!$line);
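# Each line of the last parsed file is expected to have the form written at
# the end of a run:
#   <datetime>\t<byte offset>\t<original log line>
# or, for pgbouncer logs:
#   pgbouncer\t<datetime>\t<byte offset>\t<original log line>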
my ($datetime, $current_pos, $orig, @others) = split(/\t/, $line);
# Last parsed line with pgbouncer log starts with this keyword
if ($datetime eq 'pgbouncer')
{
$pgb_saved_last_line{datetime} = $current_pos;
$pgb_saved_last_line{current_pos} = $orig;
$pgb_saved_last_line{orig} = join("\t", @others);
}
else
{
$saved_last_line{datetime} = $datetime;
$saved_last_line{current_pos} = $current_pos;
$saved_last_line{orig} = $orig;
}
&logmsg('DEBUG', "Found log offset " . ($saved_last_line{current_pos} || $pgb_saved_last_line{current_pos}) . " in file $last_parsed");
}
# These two log formats must be read from the start of the file
if ( ($format eq 'binary') || ($format eq 'csv') )
{
$saved_last_line{current_pos} = 0;
$pgb_saved_last_line{current_pos} = 0 if ($format eq 'binary');
}
}
else
{
localdie("FATAL: can't read last parsed line from $last_parsed, $!\n");
}
}
$tmp_last_parsed = 'tmp_' . basename($last_parsed) if ($last_parsed);
$tmp_last_parsed = "$TMP_DIR/$tmp_last_parsed";
$tmp_dblist = "$TMP_DIR/dblist.tmp";
###
### Explanation:
###
### Logic for the BINARY storage ($outdir) SHOULD be:
### If ('noclean')
### do nothing (keep everything)
### If (NO 'noclean') and (NO 'retention'):
### use an arbitrary retention duration of: 5 weeks
### remove BINARY files older than LAST_PARSED_MONTH-retention
### DO NOT CHECK for HTML file existence as OLD HTML files may be deleted by external tools
### which may lead to BINARY files NEVER deleted
### If (NO 'noclean') and ('retention'):
### remove BINARY files older than LAST_PARSED_MONTH-retention
### DO NOT CHECK for HTML file existence as OLD HTML files may be deleted by external tools
### which may lead to BINARY files NEVER deleted
###
### Logic for the HTML storage ($html_outdir || $outdir) SHOULD be:
### DO NOT check 'noclean' as this flag is dedicated to BINARY files
### If (NO 'retention'):
### do nothing (keep everything)
### If ('retention'):
### remove HTML folders/files older than LAST_PARSED_MONTH-retention (= y/m/d):
### days older than d in y/m/d
### months older than m in y/m/d
### years older than y in y/m/d
###
# Clear BIN/HTML storages in incremental mode
my @all_outdir = ();
push(@all_outdir, $outdir) if ($outdir);
push(@all_outdir, $html_outdir) if ($html_outdir);
### $retention_bin = 5 if (!$noclean && !$retention);
### $retention_bin = 0 if ($noclean && !$retention);
### $retention_bin = $retention if (!$noclean && $retention);
### $retention_bin = 0 if ($noclean && $retention);
### equivalent to:
my $retention_bin = $retention;
$retention_bin = 0 if ($noclean && $retention);
$retention_bin = 5 if (!$noclean && !$retention);
### $retention_html = 0 if (!$noclean && !$retention);
### $retention_html = 0 if ($noclean && !$retention);
### $retention_html = $retention if (!$noclean && $retention);
### $retention_html = $retention if ($noclean && $retention);
### equivalent to:
my $retention_html = $retention;
### We will handle noclean/!noclean and retention_xxx/!retention_xxx below
### Note: !retention_xxx equivalent to retention_xxx = 0
&logmsg('DEBUG', "BIN/HTML Retention cleanup: Initial cleanup flags - noclean=[$noclean] - retention_bin=[$retention_bin] - retention_html=[$retention_html] - saved_last_line{datetime}=[$saved_last_line{datetime}] - pgb_saved_last_line{datetime}=[$pgb_saved_last_line{datetime}] - all_outdir=[@all_outdir]");
if ( scalar(@all_outdir) && ($saved_last_line{datetime} || $pgb_saved_last_line{datetime}) )
{
foreach my $ret_dir (@all_outdir)
{
if (($saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /) ||
($pgb_saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /))
{
# Search the current week following the last parse date
my $limit_yw_bin = $1;
my $limit_yw_html = $1;
my $wn = &get_week_number($1, $2, $3);
# BIN: Case of year overlap
if (($wn - $retention_bin) < 1)
{
# Rewind to previous year
$limit_yw_bin--;
# Get number of last week of previous year, can be 52 or 53
my $prevwn = &get_week_number($limit_yw_bin, 12, 31);
# Add week number including retention_bin to the previous year
$limit_yw_bin .= sprintf("%02d", $prevwn - abs($wn - $retention_bin));
} else {
$limit_yw_bin .= sprintf("%02d", $wn - $retention_bin);
}
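# Worked example (illustrative dates): if the last parsed line falls in week 2
# of 2024 and $retention_bin is 5, then $wn - $retention_bin = -3, so we rewind
# to 2023; assuming get_week_number() returns 52 for 2023-12-31, the limit
# becomes '2023' . (52 - 3) = '202349', i.e. week directories older than week
# 49 of 2023 will be removed.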
&logmsg('DEBUG', "BIN Retention cleanup: YearWeek Limit computation - YearWeek=<$limit_yw_bin> - This will help later removal");
# HTML: Case of year overlap
if (($wn - $retention_html) < 1)
{
# Rewind to previous year
$limit_yw_html--;
# Get number of last week of previous year, can be 52 or 53
my $prevwn = &get_week_number($limit_yw_html, 12, 31);
# Add week number including retention_html to the previous year
$limit_yw_html .= sprintf("%02d", $prevwn - abs($wn - $retention_html));
} else {
$limit_yw_html .= sprintf("%02d", $wn - $retention_html);
}
&logmsg('DEBUG', "HTML Retention cleanup: YearWeek Limit computation - YearWeek=<$limit_yw_html> - This will help later removal");
my @dyears = ();
if ( opendir(DIR, "$ret_dir") ) {
@dyears = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
} else {
&logmsg('ERROR', "BIN/HTML Retention cleanup: can't opendir $ret_dir: $!");
}
# Find obsolete week directories that should be cleaned
foreach my $y (sort { $a <=> $b } @dyears) {
my @weeks = ();
if ( opendir(DIR, "$ret_dir/$y") ) {
@weeks = grep { $_ =~ /^week-\d+$/ } readdir(DIR);
closedir DIR;
} else {
&logmsg('ERROR', "BIN/HTML Retention cleanup: can't opendir $ret_dir/$y: $!");
}
foreach my $w (sort { $a <=> $b } @weeks) {
$w =~ /^week-(\d+)$/;
if ( (!$noclean) && $retention_bin ) {
if ("$y$1" lt $limit_yw_bin) {
&logmsg('DEBUG', "BIN Retention cleanup: Removing obsolete week directory $ret_dir/$y/week-$1");
&cleanup_directory_bin("$ret_dir/$y/week-$1", 1);
}
}
if ( $retention_html ) {
if ("$y$1" lt $limit_yw_html) {
&logmsg('DEBUG', "HTML Retention cleanup: Removing obsolete week directory $ret_dir/$y/week-$1");
&cleanup_directory_html("$ret_dir/$y/week-$1", 1);
}
}
}
}
# Find obsolete months and days
foreach my $y (sort { $a <=> $b } @dyears) {
my @dmonths = ();
if ( opendir(DIR, "$ret_dir/$y") ) {
@dmonths = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
} else {
&logmsg('ERROR', "BIN/HTML Retention cleanup: can't opendir $ret_dir/$y: $!");
}
# Now remove the HTML monthly reports
if ( $retention_html ) {
foreach my $m (sort { $a <=> $b } @dmonths) {
my $diff_day = $retention_html * 7 * 86400;
my $lastday = 0;
if (($saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /) ||
($pgb_saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /)) {
$lastday = timelocal(1,1,1,$3,$2-1,$1-1900);
}
if ( $lastday ) {
my $lastday_minus_retention = $lastday - $diff_day;
my $lastday_minus_retention_Y = strftime('%Y', localtime($lastday_minus_retention));
my $lastday_minus_retention_M = strftime('%m', localtime($lastday_minus_retention));
my $lastday_minus_retention_prev_month_Y = $lastday_minus_retention_Y;
my $lastday_minus_retention_prev_month_M = $lastday_minus_retention_M - 1;
if ( $lastday_minus_retention_prev_month_M < 1 ) {
$lastday_minus_retention_prev_month_Y -= 1;
$lastday_minus_retention_prev_month_M = 12;
}
$lastday_minus_retention_prev_month_Y = sprintf("%04d", $lastday_minus_retention_prev_month_Y);
$lastday_minus_retention_prev_month_M = sprintf("%02d", $lastday_minus_retention_prev_month_M);
if ("$y$m" lt "$lastday_minus_retention_Y$lastday_minus_retention_M") {
&logmsg('DEBUG', "HTML Retention cleanup: Removing obsolete month directory $ret_dir/$y/$m");
&cleanup_directory_html("$ret_dir/$y/$m", 1);
}
}
}
}
# Now remove the corresponding days
foreach my $m (sort { $a <=> $b } @dmonths) {
my @ddays = ();
if ( opendir(DIR, "$ret_dir/$y/$m") ) {
@ddays = grep { $_ =~ /^\d+$/ } readdir(DIR);
closedir DIR;
} else {
&logmsg('ERROR', "BIN/HTML Retention cleanup: can't opendir $ret_dir/$y/$m: $!");
}
foreach my $d (sort { $a <=> $b } @ddays)
{
if ( (!$noclean) && $retention_bin ) {
# Remove obsolete days when we are in binary mode
# with noreport - there's no week-N directory
my $diff_day = $retention_bin * 7 * 86400;
my $oldday = timelocal(1,1,1,$d,$m-1,$y-1900);
my $lastday = $oldday;
if (($saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /) ||
($pgb_saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /)) {
$lastday = timelocal(1,1,1,$3,$2-1,$1-1900);
}
if (($lastday - $oldday) > $diff_day) {
&logmsg('DEBUG', "BIN Retention cleanup: Removing obsolete day directory $ret_dir/$y/$m/$d");
&cleanup_directory_bin("$ret_dir/$y/$m/$d", 1);
}
}
if ( $retention_html ) {
# Remove obsolete HTML day directories older
# than the HTML retention limit
my $diff_day = $retention_html * 7 * 86400;
my $oldday = timelocal(1,1,1,$d,$m-1,$y-1900);
my $lastday = $oldday;
if (($saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /) ||
($pgb_saved_last_line{datetime} =~ /^(\d+)\-(\d+)\-(\d+) /)) {
$lastday = timelocal(1,1,1,$3,$2-1,$1-1900);
}
if (($lastday - $oldday) > $diff_day) {
&logmsg('DEBUG', "HTML Retention cleanup: Removing obsolete day directory $ret_dir/$y/$m/$d");
&cleanup_directory_html("$ret_dir/$y/$m/$d", 1);
}
}
}
if ( rmdir("$ret_dir/$y/$m") ) {
&logmsg('DEBUG', "BIN/HTML Retention cleanup: Removing obsolete empty directory $ret_dir/$y/$m");
}
}
if ( rmdir("$ret_dir/$y") ) {
&logmsg('DEBUG', "BIN/HTML Retention cleanup: Removing obsolete empty directory $ret_dir/$y");
}
}
}
}
}
# Main loop reading log files
my $global_totalsize = 0;
my @given_log_files = ( @log_files );
chomp(@given_log_files);
# Store globally the total size of each log file
my %file_size = ();
foreach my $logfile ( @given_log_files )
{
$file_size{$logfile} = &get_file_size($logfile);
$global_totalsize += $file_size{$logfile} if ($file_size{$logfile} > 0);
}
# Verify that the file has not changed for incremental mode
if (($incremental || $last_parsed) && !$remote_host)
{
my @tmpfilelist = ();
# Remove files that have already been parsed during previous runs
foreach my $f (@given_log_files)
{
if ($f eq '-')
{
&logmsg('DEBUG', "waiting for log entries from stdin.");
$saved_last_line{current_pos} = 0;
push(@tmpfilelist, $f);
}
elsif ($f =~ /\.bin$/)
{
&logmsg('DEBUG', "binary file \"$f\" as input, there is no log to parse.");
$saved_last_line{current_pos} = 0;
push(@tmpfilelist, $f);
}
elsif ( $journalctl_cmd && ($f eq $journalctl_cmd) )
{
my $since = '';
if ( ($journalctl_cmd !~ /--since|-S/) &&
($saved_last_line{datetime} =~ /^(\d+)-(\d+)-(\d+).(\d+):(\d+):(\d+)/) )
{
$since = " --since='$1-$2-$3 $4:$5:$6'";
}
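# Illustration (assumed values): with --journalctl 'journalctl -u postgresql-9.5'
# and a saved datetime of 2024-03-15 10:22:01, the command becomes:
#   journalctl -u postgresql-9.5 --since='2024-03-15 10:22:01'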
&logmsg('DEBUG', "journalctl call will start since: $saved_last_line{datetime}");
my $new_journalctl_cmd = "$f$since";
push(@tmpfilelist, $new_journalctl_cmd);
$file_size{$new_journalctl_cmd} = $file_size{$f};
}
elsif ( $log_command && ($f eq $log_command) )
{
&logmsg('DEBUG', "custom command waiting for log entries from stdin.");
$saved_last_line{current_pos} = 0;
push(@tmpfilelist, $f);
}
else
{
# Auto detect log format for proper parsing
my $fmt = $format || 'stderr';
$fmt = autodetect_format($f, $file_size{$f});
$fmt ||= $format;
# Set regex to parse the log file
$fmt = set_parser_regex($fmt);
if (($fmt ne 'pgbouncer') && ($saved_last_line{current_pos} > 0))
{
my ($retcode, $msg) = &check_file_changed($f, $file_size{$f}, $fmt, $saved_last_line{datetime}, $saved_last_line{current_pos});
if (!$retcode)
{
&logmsg('DEBUG', "this file has already been parsed: $f, $msg");
}
else
{
push(@tmpfilelist, $f);
}
}
elsif (($fmt eq 'pgbouncer') && ($pgb_saved_last_line{current_pos} > 0))
{
my ($retcode, $msg) = &check_file_changed($f, $file_size{$f}, $fmt, $pgb_saved_last_line{datetime}, $pgb_saved_last_line{current_pos});
if (!$retcode)
{
&logmsg('DEBUG', "this file has already been parsed: $f, $msg");
}
else
{
push(@tmpfilelist, $f);
}
}
else
{
push(@tmpfilelist, $f);
}
}
}
@given_log_files = ();
push(@given_log_files, @tmpfilelist);
}
# Pipe used for progress bar in multiprocess
my $pipe = undef;
# Seeking to an old log position is not possible outside incremental mode
if (!$last_parsed || !exists $saved_last_line{current_pos}) {
$saved_last_line{current_pos} = 0;
$pgb_saved_last_line{current_pos} = 0;
}
if ($dump_all_queries)
{
$fh = new IO::File;
$fh->open($outfiles[0], '>') or localdie("FATAL: can't open output file $outfiles[0], $!\n");
}
####
# Start parsing all log files
####
# Number of running process
my $child_count = 0;
# Set max number of parallel process
my $parallel_process = 0;
# Open a pipe for interprocess communication
my $reader = new IO::Handle;
my $writer = new IO::Handle;
# Fork the logger process
if (!$nofork && $progress)
{
$pipe = IO::Pipe->new($reader, $writer);
$writer->autoflush(1);
spawn sub
{
&multiprocess_progressbar($global_totalsize);
};
}
# Initialise the list of reports to produce with the default report.
# If $report_per_database is enabled there will be one report per
# database. Information not related to a database (checkpoint, pgbouncer
# statistics, etc.) will be included in the default report, which should
# be the postgres database to be read by the DBA of the PostgreSQL cluster.
$DBLIST{$DBALL} = 1;
# Prevent per-file parallelism from being higher than the number of files
$job_per_file = ($#given_log_files+1) if ( ($job_per_file > 1) && ($job_per_file > ($#given_log_files+1)) );
# Parse each log file following the multiprocess mode chosen (-j or -J)
my $current_log_file = '';
foreach my $logfile ( @given_log_files )
{
$current_log_file = $logfile if ($#given_log_files > 0);
# If we just want to build incremental reports from binary files
# just build the list of input directories with binary files
if ($incremental && $html_outdir && !$outdir)
{
my $incr_date = '';
my $binpath = '';
if ($logfile =~ /^(.*\/)(\d+)\/(\d+)\/(\d+)\/[^\/]+\.bin$/)
{
$binpath = $1;
$incr_date = "$2-$3-$4";
}
# Mark the directory as needing index update
if (open(my $out, '>>', "$last_parsed.tmp")) {
print $out "$binpath$incr_date\n";
close($out);
} else {
&logmsg('ERROR', "can't save last parsed line into $last_parsed.tmp, $!");
}
next;
}
# Confirm if we can use multiprocess for this file
my $pstatus = confirm_multiprocess($logfile);
if ($pstatus >= 0)
{
if ($pstatus = 1 && $job_per_file > 1)
{
$parallel_process = $job_per_file;
}
else
{
$parallel_process = $queue_size;
}
}
else
{
&logmsg('DEBUG', "parallel processing will not be used.");
$parallel_process = 1;
}
# Wait until a child dies if the max number of parallel processes is reached
while ($child_count >= $parallel_process)
{
my $kid = waitpid(-1, WNOHANG);
if ($kid > 0)
{
$child_count--;
delete $RUNNING_PIDS{$kid};
}
sleep(1);
}
# Get log format of the current file
my $fmt = $format || 'stderr';
my $logfile_orig = $logfile;
if ($logfile ne '-' && !$journalctl_cmd && !$log_command)
{
$fmt = &autodetect_format($logfile, $file_size{$logfile});
$fmt ||= $format;
# Remove log format from filename if any
$logfile =~ s/:(stderr|csv|syslog|pgbouncer|jsonlog|logplex|rds|redshift)\d*$//i;
&logmsg('DEBUG', "pgBadger will use log format $fmt to parse $logfile.");
}
else
{
&logmsg('DEBUG', "Can not autodetect log format, assuming $fmt.");
}
# Set the timezone to use
&set_timezone();
# Set the regex to parse the log file following the format
$fmt = set_parser_regex($fmt);
# Do not use split method with remote and compressed files, stdin, custom or journalctl command
if ( ($parallel_process > 1) && ($queue_size > 1) &&
($logfile !~ $compress_extensions) &&
($logfile !~ /\.bin$/i) && ($logfile ne '-') &&
($logfile !~ /^(http[s]*|ftp[s]*|ssh):/i) &&
(!$journalctl_cmd || ($logfile !~ /\Q$journalctl_cmd\E/)) &&
(!$log_command || ($logfile !~ /\Q$log_command\E/))
)
{
# Create multiple processes to parse one log file by chunks of data
my @chunks = split_logfile($logfile, $file_size{$logfile_orig}, ($fmt eq 'pgbouncer') ? $pgb_saved_last_line{current_pos} : $saved_last_line{current_pos});
&logmsg('DEBUG', "The following boundaries will be used to parse file $logfile, " . join('|', @chunks));
for (my $i = 0; $i < $#chunks; $i++)
{
while ($child_count >= $parallel_process)
{
my $kid = waitpid(-1, WNOHANG);
if ($kid > 0)
{
$child_count--;
delete $RUNNING_PIDS{$kid};
}
sleep(1);
}
localdie("FATAL: Abort signal received when processing to next chunk\n") if ($interrupt == 2);
last if ($interrupt);
push(@tempfiles, [ tempfile('tmp_pgbadgerXXXX', SUFFIX => '.bin', DIR => $TMP_DIR, O_TEMPORARY => 1, UNLINK => 1 ) ]);
spawn sub
{
&process_file($logfile, $file_size{$logfile_orig}, $fmt, $tempfiles[-1]->[0], $chunks[$i], $chunks[$i+1], $i);
};
$child_count++;
}
}
else
{
# Start parsing one file per parallel process
if (!$nofork)
{
push(@tempfiles, [ tempfile('tmp_pgbadgerXXXX', SUFFIX => '.bin', DIR => $TMP_DIR, UNLINK => 1 ) ]);
spawn sub
{
&process_file($logfile, $file_size{$logfile_orig}, $fmt, $tempfiles[-1]->[0], ($fmt eq 'pgbouncer') ? $pgb_saved_last_line{current_pos} : $saved_last_line{current_pos});
};
$child_count++;
}
else
{
&process_file($logfile, $file_size{$logfile_orig}, $fmt, undef, ($fmt eq 'pgbouncer') ? $pgb_saved_last_line{current_pos} : $saved_last_line{current_pos});
}
}
localdie("FATAL: Abort signal received when processing next file\n") if ($interrupt == 2);
last if ($interrupt);
}
# Wait for all child processes to die except for the logger
# On Windows OS $progress is disabled so we don't go here
while (scalar keys %RUNNING_PIDS > $progress)
{
my $kid = waitpid(-1, WNOHANG);
if ($kid > 0) {
delete $RUNNING_PIDS{$kid};
}
sleep(1);
}
# Terminate the process logger
if (!$nofork)
{
foreach my $k (keys %RUNNING_PIDS)
{
kill('USR1', $k);
%RUNNING_PIDS = ();
}
# Clear previous statistics
&init_stats_vars();
}
# Load all data gathered by all the different processes
if (!$nofork)
{
foreach my $f (@tempfiles)
{
next if (!-e "$f->[1]" || -z "$f->[1]");
my $fht = new IO::File;
$fht->open("< $f->[1]") or localdie("FATAL: can't open temp file $f->[1], $!\n");
load_stats($fht);
$fht->close();
}
}
# Get last line parsed from all processes
if ($last_parsed)
{
&logmsg('DEBUG', "Reading temporary last parsed line from $tmp_last_parsed");
if (open(my $in, '<', $tmp_last_parsed) )
{
while (my $line = <$in>)
{
chomp($line);
$line =~ s/\r//;
my ($d, $p, $l, @o) = split(/\t/, $line);
if ($d ne 'pgbouncer')
{
if (!$last_line{datetime} || ($d gt $last_line{datetime}))
{
$last_line{datetime} = $d;
$last_line{orig} = $l;
$last_line{current_pos} = $p;
}
}
else
{
$d = $p;
$p = $l;
$l = join("\t", @o);
if (!$pgb_last_line{datetime} || ($d gt $pgb_last_line{datetime}))
{
$pgb_last_line{datetime} = $d;
$pgb_last_line{orig} = $l;
$pgb_last_line{current_pos} = $p;
}
}
}
close($in);
}
unlink("$tmp_last_parsed");
}
# Save last line parsed
if ($last_parsed && ($last_line{datetime} || $pgb_last_line{datetime})
&& ($last_line{orig} || $pgb_last_line{orig}) )
{
&logmsg('DEBUG', "Saving last parsed line into $last_parsed");
if (open(my $out, '>', $last_parsed))
{
if ($last_line{datetime})
{
$last_line{current_pos} ||= 0;
print $out "$last_line{datetime}\t$last_line{current_pos}\t$last_line{orig}\n";
}
elsif ($saved_last_line{datetime})
{
$saved_last_line{current_pos} ||= 0;
print $out "$saved_last_line{datetime}\t$saved_last_line{current_pos}\t$saved_last_line{orig}\n";
}
if ($pgb_last_line{datetime})
{
$pgb_last_line{current_pos} ||= 0;
print $out "pgbouncer\t$pgb_last_line{datetime}\t$pgb_last_line{current_pos}\t$pgb_last_line{orig}\n";
}
elsif ($pgb_saved_last_line{datetime})
{
$pgb_saved_last_line{current_pos} ||= 0;
print $out "pgbouncer\t$pgb_saved_last_line{datetime}\t$pgb_saved_last_line{current_pos}\t$pgb_saved_last_line{orig}\n";
}
close($out);
}
else
{
&logmsg('ERROR', "can't save last parsed line into $last_parsed, $!");
}
}
if ($terminate)
{
unlink("$PID_FILE");
exit 2;
}
####
# Generates statistics output
####
my $t1 = Benchmark->new;
my $td = timediff($t1, $t0);
&logmsg('DEBUG', "the log statistics gathering took:" . timestr($td));
if ($dump_all_queries)
{
$fh->close();
# Remove pidfile and temporary file
unlink($tmp_dblist) if ($tmp_dblist);
unlink("$PID_FILE");
unlink("$last_parsed.tmp") if (-e "$last_parsed.tmp");
unlink($TMP_DIR . "/pgbadger_tmp_$$.bin") if ($#outfiles >= 1);
exit 0;
}
# Read the list of databases we have processed in all child processes
if ($report_per_database)
{
%DBLIST = ();
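# Each line of the temporary dblist file is expected to be a ';' separated
# list of database/line-count pairs (e.g. db1;123;db2;45, illustrative values),
# which is why it can be split directly into a hash below.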
if (open(my $out, '<', "$tmp_dblist"))
{
my @data = <$out>;
foreach my $tmp (@data)
{
chomp($tmp);
my %dblist = split(/;/, $tmp);
foreach my $d (keys %dblist)
{
next if ($#dbname >= 0 and !grep(/^$d$/i, @dbname));
$DBLIST{$d} = 1;
$overall_stat{nlines}{$d} += $dblist{$d};
}
}
close($out);
&logmsg('DEBUG', "looking for list of database retrieved from log: " . join(',', keys %DBLIST));
}
else
{
&logmsg('ERROR', "can't read list of database from file $tmp_dblist, $!");
}
}
if ( !$incremental && ($#given_log_files >= 0) )
{
my $chld_running = 0;
foreach my $db (sort keys %DBLIST)
{
next if (!$db);
if ($nofork || $parallel_process <= 1) {
&gen_database_report($db);
}
else
{
while ($chld_running >= $parallel_process)
{
my $kid = waitpid(-1, WNOHANG);
if ($kid > 0)
{
$chld_running--;
delete $RUNNING_PIDS{$kid};
}
sleep(1);
}
spawn sub
{
&gen_database_report($db);
};
$chld_running++;
}
}
if (!$nofork && $parallel_process > 1)
{
while (scalar keys %RUNNING_PIDS > $progress)
{
my $kid = waitpid(-1, WNOHANG);
if ($kid > 0) {
delete $RUNNING_PIDS{$kid};
}
sleep(1);
}
}
}
elsif (!$incremental || !$noreport)
{
# Look for directory where report must be generated
my @build_directories = ();
if (-e "$last_parsed.tmp")
{
if (open(my $in, '<', "$last_parsed.tmp")) {
while (my $l = <$in>) {
chomp($l);
$l =~ s/\r//;
push(@build_directories, $l) if (!grep(/^$l$/, @build_directories));
}
close($in);
unlink("$last_parsed.tmp");
} else {
&logmsg('ERROR', "can't read file $last_parsed.tmp, $!");
}
&build_incremental_reports(@build_directories);
} else {
&logmsg('DEBUG', "no new entries in your log(s) since last run.");
}
}
my $t2 = Benchmark->new;
$td = timediff($t2, $t1);
&logmsg('DEBUG', "building reports took: " . timestr($td));
$td = timediff($t2, $t0);
&logmsg('DEBUG', "the total execution time took: " . timestr($td));
# Remove pidfile and temporary file
unlink($tmp_dblist);
unlink("$PID_FILE");
unlink("$last_parsed.tmp") if (-e "$last_parsed.tmp");
unlink($TMP_DIR . "/pgbadger_tmp_$$.bin") if ($#outfiles >= 1);
exit 0;
#-------------------------------------------------------------------------------
# Show pgBadger command line usage
sub usage
{
print qq{
Usage: pgbadger [options] logfile [...]
PostgreSQL log analyzer with fully detailed reports and graphs.
Arguments:
logfile can be a single log file, a list of files, or a shell command
returning a list of files. If you want to pass log content from stdin
use - as filename. Note that input from stdin will not work with csvlog.
Options:
-a | --average minutes : number of minutes to build the average graphs of
queries and connections. Default 5 minutes.
-A | --histo-average min: number of minutes to build the histogram graphs
of queries. Default 60 minutes.
-b | --begin datetime : start date/time for the data to be parsed in log
(either a timestamp or a time)
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
-D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log
(either a timestamp or a time)
-E | --explode : explode the main report by generating one report
per database. Global information not related to a
database is added to the postgres database report.
-f | --format logtype : possible values: syslog, syslog2, stderr, jsonlog,
csv, pgbouncer, logplex, rds and redshift. Use this
option when pgBadger is not able to detect the log
format.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-H | --html-outdir path: path to directory where HTML report must be written
in incremental mode, binary files stay on directory
defined with -O, --outdir option.
-i | --ident name : programname used as syslog ident. Default: postgres
-I | --incremental : use incremental mode, reports will be generated by
days in a separate directory, --outdir must be set.
-j | --jobs number : number of jobs to run at same time for a single log
file. Run as single by default or when working with
csvlog format.
-J | --Jobs number : number of log files to parse in parallel. Process
one file at a time by default.
-l | --last-parsed file: allow incremental log parsing by registering the
last datetime and line parsed. Useful if you want
to watch errors since last run or if you want one
report per day with a log rotated each week.
-L | --logfile-list file:file containing a list of log files to parse.
-m | --maxlength size : maximum length of a query, it will be restricted to
the given size. Default truncate size is $maxlength.
-M | --no-multiline : do not collect multiline statements to avoid garbage
especially on errors that generate a huge report.
-N | --appname name : only report on entries for given application name
-o | --outfile filename: define the filename for the output. Default depends
on the output format: out.html, out.txt, out.bin,
or out.json. This option can be used multiple times
to output several formats. To use json output, the
Perl module JSON::XS must be installed, to dump
output to stdout, use - as filename.
-O | --outdir path : directory where out files must be saved.
-p | --prefix string : the value of your custom log_line_prefix as
defined in your postgresql.conf. Only use it if you
aren't using one of the standard prefixes specified
in the pgBadger documentation, such as if your
prefix includes additional variables like client IP
or application name. MUST contain escape sequences
for time (%t, %m or %n) and processes (%p or %c).
See examples below.
-P | --no-prettify : disable SQL queries prettify formatter.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-Q | --query-numbering : add numbering of queries to the output when using
options --dump-all-queries or --normalized-only.
-r | --remote-host ip : set the host where to execute the cat command on
remote log file to parse the file locally.
-R | --retention N : number of weeks to keep in incremental mode. Defaults
to 0, disabled. Used to set the number of weeks to
keep in output directory. Older weeks and days
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-S | --select-only : only report SELECT queries.
-t | --top number : number of queries to store/display. Default: 20.
-T | --title string : change title of the HTML page report.
-u | --dbuser username : only report on entries for the given user.
-U | --exclude-user username : exclude entries for the specified user from
report. Can be used multiple times.
-v | --verbose : enable verbose or debug mode. Disabled by default.
-V | --version : show pgBadger version and exit.
-w | --watch-mode : only report errors just like logwatch could do.
-W | --wide-char : encode html output of queries into UTF8 to avoid
Perl message "Wide character in print".
-x | --extension : output format. Values: text, html, bin or json.
Default: html
-X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat, bzcat or unzip is not in your path.
-Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
Use this to adjust date/time in JavaScript graphs.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-file filename: path of the file that contains each regex to use
to exclude queries from the report. One regex per
line.
--include-query regex : any query that does not match the given regex will
be excluded from the report. You can use this
option multiple times. For example: "(tbl1|tbl2)".
--include-file filename: path of the file that contains each regex to the
queries to include from the report. One regex per
line.
--disable-error : do not generate error report.
--disable-hourly : do not generate hourly report.
--disable-type : do not generate report of queries by type, database
or user.
--disable-query : do not generate query reports (slowest, most
frequent, queries by users, by database, ...).
--disable-session : do not generate session report.
--disable-connection : do not generate connection report.
--disable-lock : do not generate lock report.
--disable-temporary : do not generate temporary report.
--disable-checkpoint : do not generate checkpoint/restartpoint report.
--disable-autovacuum : do not generate autovacuum report.
--charset : used to set the HTML charset to be used.
Default: utf-8.
--csv-separator : used to set the CSV field separator, default: ,
--exclude-time regex : any timestamp matching the given regex will be
excluded from the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--include-time regex : only timestamps matching the given regex will be
included in the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--exclude-db name : exclude entries for the specified database from
report. Example: "pg_dump". Can be used multiple
times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump". Can be used
multiple times.
--exclude-line regex : exclude any log entry that will match the given
regex. Can be used multiple times.
--exclude-client name : exclude log entries for the specified client ip.
Can be used multiple times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
--noreport : no reports will be created in incremental mode.
--log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an md5 sum under each query report.
--journalctl command : command to use to replace PostgreSQL logfile by
a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir path : set the path where the pid file must be stored.
Default /tmp
--pid-file file : set the name of the pid file to manage concurrent
execution of pgBadger. Default: pgbadger.pid
--rebuild : used to rebuild all html reports in incremental
output directories where there are binary data files.
--pgbouncer-only : only show PgBouncer-related menus in the header.
--start-monday : in incremental mode, calendar weeks start on
Sunday. Use this option to start on a Monday.
--iso-week-number : in incremental mode, calendar weeks start on
Monday and respect the ISO 8601 week number, range
01 to 53, where week 1 is the first week that has
at least 4 days in the new year.
--normalized-only : only dump all normalized queries to out.txt
--log-timezone +/-XX : Set the number of hours from GMT of the timezone
that must be used to adjust date/time read from
log file before being parsed. Using this option
makes log search with a date/time more difficult.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--prettify-json : use it if you want json output to be prettified.
--month-report YYYY-MM : create a cumulative HTML report over the specified
month. Requires incremental output directories and
the presence of all necessary binary data files
--day-report YYYY-MM-DD: create an HTML report over the specified day.
Requires incremental output directories and the
presence of all necessary binary data files
--noexplain : do not process lines generated by auto_explain.
--command CMD : command to execute to retrieve log entries on
stdin. pgBadger will open a pipe to the command
and parse log entries generated by the command.
--no-week : inform pgbadger to not build weekly reports in
incremental mode. Useful if it takes too much time.
--explain-url URL : use it to override the url of the graphical explain
tool. Default: $EXPLAIN_URL
--tempdir DIR : set directory where temporary files will be written
Default: File::Spec->tmpdir() || '/tmp'
--no-process-info : disable changing process title to help identify
pgbadger process, some systems do not support it.
--dump-all-queries : dump all queries found in the log file replacing
bind parameters included in the queries at their
respective placeholders positions.
--keep-comments : do not remove comments from normalized queries. It
can be useful if you want to distinguish between
same normalized queries.
--no-progressbar : disable progressbar.
--dump-raw-csv : parse the log and dump the information into CSV
format. No further processing is done, no report.
--include-pid PID : only report events related to the session pid (\%p).
Can be used multiple times.
--include-session ID : only report events related to the session id (\%c).
Can be used multiple times.
--histogram-query VAL : use custom inbound for query times histogram.
Default inbound in milliseconds:
0,1,5,10,25,50,100,500,1000,10000
--histogram-session VAL : use custom inbound for session times histogram.
Default inbound in milliseconds:
0,500,1000,30000,60000,600000,1800000,3600000,28800000
--no-fork : do not fork any process, for debugging purpose.
pgBadger is able to parse a remote log file using a passwordless ssh connection.
Use -r or --remote-host to set the host IP address or hostname. There are also
some additional options to fully control the ssh connection.
--ssh-program ssh path to the ssh program to use. Default: ssh.
--ssh-port port ssh port to use for the connection. Default: 22.
--ssh-user username connection login name. Defaults to running user.
--ssh-identity file path to the identity file to use.
--ssh-timeout second timeout to ssh connection failure. Default: 10 sec.
--ssh-option options list of -o options to use for the ssh connection.
Options always used:
-o ConnectTimeout=\$ssh_timeout
-o PreferredAuthentications=hostbased,publickey
Log file to parse can also be specified using a URI, supported protocols are
http[s] and [s]ftp. The curl command will be used to download the file, and the
file will be parsed during download. The ssh protocol is also supported and will
use the ssh command just like with the remote host usage. See examples below.
Return codes:
0: on success
1: die on error
2: if it has been interrupted using ctrl+c for example
3: the pid file already exists or can not be created
4: no log file was given at command line
Examples:
pgbadger /var/log/postgresql.log
pgbadger /var/log/postgres.log.2.gz /var/log/postgres.log.1.gz /var/log/postgres.log
pgbadger /var/log/postgresql/postgresql-2012-05-*
pgbadger --exclude-query="^(COPY|COMMIT)" /var/log/postgresql.log
pgbadger -b "2012-06-25 10:56:11" -e "2012-06-25 10:59:11" /var/log/postgresql.log
cat /var/log/postgres.log | pgbadger -
# Log line prefix with stderr log output
pgbadger --prefix '%t [%p]: user=%u,db=%d,client=%h' /pglog/postgresql-2012-08-21*
pgbadger --prefix '%m %u@%d %p %r %a : ' /pglog/postgresql.log
# Log line prefix with syslog log output
pgbadger --prefix 'user=%u,db=%d,client=%h,appname=%a' /pglog/postgresql-2012-08-21*
# Use my 8 CPUs to parse my 10GB file faster, much faster
pgbadger -j 8 /pglog/postgresql-10.1-main.log
Use URI notation for remote log file:
pgbadger http://172.12.110.1//var/log/postgresql/postgresql-10.1-main.log
pgbadger ftp://username\@172.12.110.14/postgresql-10.1-main.log
pgbadger ssh://username\@172.12.110.14:2222//var/log/postgresql/postgresql-10.1-main.log*
You can combine a local PostgreSQL log and a remote pgbouncer log file to parse:
pgbadger /var/log/postgresql/postgresql-10.1-main.log ssh://username\@172.12.110.14/pgbouncer.log
Reporting errors every week by cron job:
30 23 * * 1 /usr/bin/pgbadger -q -w /var/log/postgresql.log -o /var/reports/pg_errors.html
Generate report every week using incremental behavior:
0 4 * * 1 /usr/bin/pgbadger -q `find /var/log/ -mtime -7 -name "postgresql.log*"` -o /var/reports/pg_errors-`date +\\%F`.html -l /var/reports/pgbadger_incremental_file.dat
This supposes that your log file and HTML report are also rotated every week.
Or better, use the auto-generated incremental reports:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
will generate a report per day and per week.
In incremental mode, you can also specify the number of weeks to keep in the
reports:
/usr/bin/pgbadger --retention 2 -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
If you have a pg_dump at 23:00 and 13:00 each day during half an hour, you can
use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
This will help avoid having COPY statements, as generated by pg_dump, on top of
the list of slowest queries. You can also use --exclude-appname "pg_dump" to
solve this problem in a simpler way.
You can also parse journalctl output just as if it was a log file:
pgbadger --journalctl 'journalctl -u postgresql-9.5'
or worse, call it from a remote host:
pgbadger -r 192.168.1.159 --journalctl 'journalctl -u postgresql-9.5'
you don't need to specify any log file at command line, but if you have other
PostgreSQL log files to parse, you can add them as usual.
To rebuild all incremental html reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
pgBadger also supports Heroku PostgreSQL logs using logplex format:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
this will stream Heroku PostgreSQL log to pgbadger through stdin.
pgBadger can auto-detect RDS and CloudWatch PostgreSQL logs using the
rds format:
pgbadger -f rds -o rds_out.html rds.log
Each CloudSQL PostgreSQL log is a fairly normal PostgreSQL log, but encapsulated
in JSON format. It is autodetected by pgBadger but in case you need to force
the log format use `jsonlog`:
pgbadger -f jsonlog -o cloudsql_out.html cloudsql.log
This is the same as with the jsonlog extension, the json format is different
but pgBadger can parse both formats.
pgBadger also supports logs produced by CloudNativePG Postgres operator for Kubernetes:
pgbadger -f jsonlog -o cnpg_out.html cnpg.log
To create a cumulative report over a month use command:
pgbadger --month-report 2019-05 /path/to/incremental/reports/
this will add a link to the month name in the calendar view of incremental
reports to look at the report for May 2019.
Use -E or --explode if the reports were built using this option.
};
# Note that usage must be terminated by an extra newline
# to not break POD documentation at make time.
exit 0;
}
sub gen_database_report
{
my $db = shift;
foreach $outfile (@outfiles)
{
($current_out_file, $extens) = &set_output_extension($outdir, $outfile, $extension, $db);
$extens = $dft_extens if ($current_out_file eq '-' && $dft_extens);
if ($report_per_database)
{
&logmsg('LOG', "Ok, generating $extens report for database $db...");
}
else
{
&logmsg('LOG', "Ok, generating $extens report...");
}
$fh = new IO::File ">$current_out_file";
if (not defined $fh) {
localdie("FATAL: can't write to $current_out_file, $!\n");
}
if (($extens eq 'text') || ($extens eq 'txt'))
{
if ($error_only) {
&dump_error_as_text($db);
} else {
&dump_as_text($db);
}
}
elsif ($extens eq 'json')
{
if ($error_only) {
&dump_error_as_json($db);
} else {
&dump_as_json($db);
}
}
elsif ($extens eq 'binary')
{
&dump_as_binary($fh, $db);
}
else
{
&dump_as_html('.', $db);
}
$fh->close;
}
}
####
# Function used to validate whether multiple processes can be used on the given
# file. Returns 1 when full multiprocess can be used, 0 when multiprocess can
# not be used on a single file (remote file) and -1 when parallel processing
# can not be used at all (binary mode).
####
sub confirm_multiprocess
{
my $file = shift;
if (!$nofork && $progress)
{
# Not supported on Windows
if ($queue_size > 1) {
&logmsg('DEBUG', "parallel processing is not supported on this plateform.");
}
return 0;
}
if ($remote_host || $file =~ /^(http[s]*|ftp[s]*|ssh):/) {
# Disable multi process when using ssh to parse remote log
if ($queue_size > 1) {
&logmsg('DEBUG', "parallel processing is not supported with remote files.");
}
return 0;
}
# Disable parallel processing in binary mode
if ($format eq 'binary') {
if (($queue_size > 1) || ($job_per_file > 1)) {
&logmsg('DEBUG', "parallel processing is not supported with binary format.") if (!$quiet);
}
return -1;
}
return 1;
}
sub set_ssh_command
{
my ($ssh_cmd, $rhost) = @_;
#http://www.domain.com:8080/file.log:format
#ftp://www.domain.com/file.log:format
#ssh:root@domain.com:file.log:format
# Extract format part
my $fmt = '';
if ($rhost =~ s/\|([a-z2]+)$//) {
$fmt = $1;
}
$ssh_cmd = $ssh_bin || 'ssh';
$ssh_cmd .= " -p $ssh_port" if ($ssh_port);
$ssh_cmd .= " -i $ssh_identity" if ($ssh_identity);
$ssh_cmd .= " $ssh_options" if ($ssh_options);
if ($ssh_user && $rhost !~ /\@/) {
$ssh_cmd .= " $ssh_user\@$rhost";
} else {
$ssh_cmd .= " $rhost";
}
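# Illustration (assumed values): with --ssh-port 2222 and --ssh-user postgres,
# the resulting command looks like: ssh -p 2222 postgres@myhost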
if (wantarray()) {
return ($ssh_cmd, $fmt);
} else {
return $ssh_cmd;
}
}
sub set_file_list
{
my $file = shift;
my @lfiles = ();
my $file_orig = $file;
my $fmt = '';
# Remove log format from log file if any
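# For illustration, a path like '/var/log/postgresql.log:csv' (assumed value)
# has its ':csv' suffix stripped here and kept in $fmt so it can be re-appended
# to the files found below.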
if ($file =~ s/(:(?:stderr|csv|syslog|pgbouncer|jsonlog|logplex|rds|redshift)\d*)$//i)
{
$fmt = $1;
}
# Store the journalctl command as is, we will create a pipe from this command
if ( $journalctl_cmd && ($file =~ m/\Q$journalctl_cmd\E/) )
{
push(@lfiles, $file_orig);
$empty_files = 0;
}
# Store the custom log command as is, we will create a pipe from this command
elsif ( $log_command && ($file =~ m/\Q$log_command\E/) )
{
push(@lfiles, $file_orig);
$empty_files = 0;
}
# Input from stdin
elsif ($file eq '-')
{
if ($logfile_list)
{
localdie("FATAL: stdin input - can not be used with logfile list (-L).\n");
}
push(@lfiles, $file_orig);
$empty_files = 0;
}
# For input from other sources than stdin
else
{
# if it is not a remote file store the file if it is not an empty file
if (!$remote_host && $file !~ /^(http[s]*|[s]*ftp|ssh):/i)
{
localdie("FATAL: logfile \"$file\" must exist!\n") if (not -f $file);
if (-z $file)
{
print "WARNING: file $file is empty\n" if (!$quiet);
next;
}
push(@lfiles, $file_orig);
$empty_files = 0;
}
# if this is a remote file, extract the list of files using an ssh command
elsif ($file !~ /^(http[s]*|[s]*ftp):/i)
{
# Get files from remote host
if ($file !~ /^ssh:/)
{
my($filename, $dirs, $suffix) = fileparse($file);
&logmsg('DEBUG', "Looking for remote filename using command: $remote_command \"ls '$dirs'$filename\"");
my @rfiles = `$remote_command "ls '$dirs'$filename"`;
foreach my $f (@rfiles)
{
push(@lfiles, "$f$fmt");
}
}
elsif ($file =~ m#^ssh://([^\/]+)/(.*)#)
{
my $host_info = $1;
my $file = $2;
my $ssh = $ssh_command || 'ssh';
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
my($filename, $dirs, $suffix) = fileparse($file);
&logmsg('DEBUG', "Looking for remote filename using command: $ssh $host_info \"ls '$dirs'$filename\"");
my @rfiles = `$ssh $host_info "ls '$dirs'$filename"`;
$dirs = '' if ( $filename ne '' ); # ls returns relative paths for a directory but absolute ones for a file or filename pattern
foreach my $f (@rfiles)
{
$host_info =~ s/-p (\d+) (.*)/$2:$1/;
push(@lfiles, "ssh://$host_info/$dirs$f$fmt");
}
}
$empty_files = 0;
}
# this is a remote file accessed using the http/ftp protocol, store the URI
else
{
push(@lfiles, $file_orig);
$empty_files = 0;
}
}
return @lfiles;
}
# Get inbounds of query times histogram
sub get_hist_inbound
{
my ($duration, @histogram) = @_;
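# Example with the default query histogram bounds (0,1,5,10,25,50,100,500,1000,10000):
# a 30 ms duration returns 25 (the lower bound of its bucket) and a duration
# greater than the largest bound returns -1.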
for (my $i = 0; $i <= $#histogram; $i++) {
return $histogram[$i-1] if ($histogram[$i] > $duration);
}
return -1;
}
# Compile the custom log line prefix regex
sub set_parser_regex
{
my $fmt = shift;
@prefix_params = ();
@prefix_q_params = ();
if ($fmt eq 'pgbouncer')
{
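# For illustration, this regex is meant to match pgbouncer log lines of this
# general shape (values are made up):
#   2015-05-18 10:21:45.866 3775 LOG C-0x15eb770: mydb/myuser@127.0.0.1:48278 login attempt: ...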
$pgbouncer_log_format = qr/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+(?: [A-Z\+\-\d]{3,6})? (\d+) ([^\s]+) (.\-0x[0-9a-f\.]*): ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)(?:\(\d+\))??[:\d]* (.*)/;
@pgb_prefix_params = ('t_timestamp', 't_pid', 't_loglevel', 't_session_id', 't_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse1 = qr/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+(?: [A-Z\+\-\d]{3,6})? (\d+) ([^\s]+) (.\-0x[0-9a-f\.]+|[Ss]tats): (.*)/;
@pgb_prefix_parse1 = ('t_timestamp', 't_pid', 't_loglevel', 't_session_id', 't_query');
$pgbouncer_log_parse2 = qr/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+(?: [A-Z\+\-\d]{3,6})? \d+ [^\s]+ .\-0x[0-9a-f\.]*: ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?(?:\(\d+\))?(?:\(\d+\))?[:\d]* (.*)/;
@pgb_prefix_parse2 = ('t_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse3 = qr/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+(?: [A-Z\+\-\d]{3,6})? (\d+) ([^\s]+) ([^:]+: .*)/;
@pgb_prefix_parse3 = ('t_timestamp', 't_pid', 't_loglevel', 't_query');
}
elsif ($fmt eq 'pgbouncer1')
{
$pgbouncer_log_format = qr/^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]: (.\-0x[0-9a-f\.]*): ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?(?:\(\d+\))?[:\d]* (.*)/;
@pgb_prefix_params = ('t_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_session_id', 't_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse1 = qr/^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]: (.\-0x[0-9a-f\.]+|[Ss]tats): (.*)/;
@pgb_prefix_parse1 = ('t_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_session_id', 't_query');
$pgbouncer_log_parse2 = qr/^...\s+\d+\s\d+:\d+:\d+(?:\s[^\s]+)?\s[^\s]+\s[^\s\[]+\[\d+\]: .\-0x[0-9a-f\.]*: ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?(?:\(\d+\))?[:\d]* (.*)/;
@pgb_prefix_parse2 = ('t_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse3 = qr/^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]: ([^:]+: .*)/;
@pgb_prefix_parse3 = ('t_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_query');
}
elsif ($fmt eq 'pgbouncer2')
{
$pgbouncer_log_format = qr/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]: (.\-0x[0-9a-f\.]*): ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?(?:\(\d+\))?[:\d]* (.*)/;
@pgb_prefix_params = ('t_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_session_id', 't_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse1 = qr/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]: (.\-0x[0-9a-f\.]+|[Ss]tats): (.*)/;
@pgb_prefix_parse1 = ('t_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_session_id', 't_query');
$pgbouncer_log_parse2 = qr/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:.[^\s]+)?\s[^\s]+\s(?:[^\s]+\s)?(?:[^\s]+\s)?[^\s\[]+\[\d+\]: .\-0x[0-9a-f\.]*: ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?(?:\(\d+\))?[:\d]* (.*)/;
@pgb_prefix_parse2 = ('t_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse3 = qr/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]: ([^:]+: .*)/;
@pgb_prefix_parse3 = ('t_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_query');
}
elsif ($fmt eq 'pgbouncer3')
{
$pgbouncer_log_format = qr/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+(?: [A-Z\+\-\d]{3,6})? \[(\d+)\] ([^\s]+) (.\-0x[0-9a-f\.]*): ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)(?:\(\d+\))??[:\d]* (.*)/;
@pgb_prefix_params = ('t_timestamp', 't_pid', 't_loglevel', 't_session_id', 't_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse1 = qr/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+(?: [A-Z\+\-\d]{3,6})? \[(\d+)\] ([^\s]+) (.\-0x[0-9a-f\.]+|[Ss]tats): (.*)/;
@pgb_prefix_parse1 = ('t_timestamp', 't_pid', 't_loglevel', 't_session_id', 't_query');
$pgbouncer_log_parse2 = qr/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+(?: [A-Z\+\-\d]{3,6})? \[\d+\] [^\s]+ .\-0x[0-9a-f\.]*: ([0-9a-zA-Z\_\[\]\-\.]*)\/([0-9a-zA-Z\_\[\]\-\.]*)\@([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?(?:\(\d+\))?(?:\(\d+\))?[:\d]* (.*)/;
@pgb_prefix_parse2 = ('t_dbname', 't_dbuser', 't_client', 't_query');
$pgbouncer_log_parse3 = qr/^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+(?: [A-Z\+\-\d]{3,6})? \[(\d+)\] ([^\s]+) ([^:]+: .*)/;
@pgb_prefix_parse3 = ('t_timestamp', 't_pid', 't_loglevel', 't_query');
}
elsif ($log_line_prefix)
{
# Build parameters name that will be extracted from the prefix regexp
my %res = &build_log_line_prefix_regex($log_line_prefix);
my $llp = $res{'llp'};
@prefix_params = @{ $res{'param_list'} };
$q_prefix = $res{'q_prefix'};
@prefix_q_params = @{ $res{'q_param_list'} };
if ($fmt eq 'syslog')
{
$llp =
'^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(\.\d+)?(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?\s\[(\d+)(?:\-\d+)?\]\s*'
. $llp
. '\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)';
$compiled_prefix = qr/$llp/;
unshift(@prefix_params, 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_ms', 't_host', 't_ident', 't_pid', 't_session_line');
push(@prefix_params, 't_loglevel', 't_query');
$other_syslog_line = qr/^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?\s\[(\d+)(?:\-\d+)?\]\s*(.*)/;
}
elsif ($fmt eq 'syslog2')
{
$fmt = 'syslog';
$llp =
'^(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?(?:\s\[(\d+)(?:\-\d+)?\])?\s*'
. $llp
. '\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)';
$compiled_prefix = qr/$llp/;
unshift(@prefix_params, 't_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_session_line');
push(@prefix_params, 't_loglevel', 't_query');
$other_syslog_line = qr/^(\d+-\d+)-(\d+)T(\d+):(\d+):(\d+)(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?(?:\s\[(\d+)(?:\-\d+)?\])?\s*(.*)/;
}
elsif ($fmt eq 'logplex')
{
# The output format of the heroku pg logs is as follows: timestamp app[dyno]: message
$llp =
'^(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)(\.\d+)?[+\-]\d{2}:\d{2}\s+(?:[^\s]+)?\s*app\[postgres\.(\d+)\][:]?\s+\[([^\]]+)\]\s+\[\d+\-\d+\]\s+'
. $llp
. '\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(.*)';
$compiled_prefix = qr/$llp/;
unshift(@prefix_params, 't_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_ms', 't_pid', 't_dbname');
push(@prefix_params, 't_loglevel', 't_query');
$other_syslog_line = qr/^(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)(?:\.\d+)?[+\-]\d{2}:\d{2}\s+(?:[^\s]+)?\s*app\[postgres\.\d+\][:]?\s+\[([^\]]+)\]\s+\[(\d+)\-(\d+)\]\s*(.*)/;
}
elsif ($fmt =~ /^rds$/)
{
# The output format of the RDS pg logs is as follows: %t:%r:%u@%d:[%p]: message
# With Cloudwatch it is prefixed with another timestamp
$llp = '^' . $llp
. '(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(.*)';
$compiled_prefix = qr/$llp/;
@prefix_params = ('t_timestamp', 't_client', 't_dbuser', 't_dbname', 't_pid', 't_loglevel', 't_query');
}
elsif ($fmt =~ /^redshift$/)
{
# Look at format of the AWS redshift pg logs, for example:
# '2020-03-07T16:09:43Z UTC [ db=dev user=rdsdb pid=16929 userid=1 xid=7382 ]'
$llp = '^' . $llp
. '(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(.*)';
$compiled_prefix = qr/$llp/;
@prefix_params = ('t_timestamp', 't_dbname', 't_dbuser', 't_pid', 't_loglevel', 't_query');
}
elsif ($fmt eq 'stderr' || $fmt eq 'default' || $fmt eq 'jsonlog')
{
$fmt = 'stderr' if ($fmt ne 'jsonlog');
$llp = '^' . $llp . '\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)';
$compiled_prefix = qr/$llp/;
push(@prefix_params, 't_loglevel', 't_query');
}
}
elsif ($fmt eq 'syslog')
{
$compiled_prefix =
qr/^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(\.\d+)?(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?\s\[(\d+)(?:\-\d+)?\]\s*(.*?)\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)/;
push(@prefix_params, 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_ms', 't_host', 't_ident', 't_pid', 't_session_line',
't_logprefix', 't_loglevel', 't_query');
$other_syslog_line = qr/^(...)\s+(\d+)\s(\d+):(\d+):(\d+)(?:\.\d+)?(?:\s[^\s]+)?\s([^\s]+)\s([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?\s\[(\d+)(?:\-\d+)?\]\s*(.*)/;
}
elsif ($fmt eq 'syslog2')
{
$fmt = 'syslog';
$compiled_prefix =
qr/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?(?:\s\[(\d+)(?:\-\d+)?\])?\s*(.*?)\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)/;
push(@prefix_params, 't_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_host', 't_ident', 't_pid', 't_session_line',
't_logprefix', 't_loglevel', 't_query');
$other_syslog_line = qr/^(\d+-\d+)-(\d+)T(\d+):(\d+):(\d+)(?:.[^\s]+)?\s([^\s]+)\s(?:[^\s]+\s)?(?:[^\s]+\s)?([^\s\[]+)\[(\d+)\]:(?:\s\[[^\]]+\])?(?:\s\[(\d+)(?:\-\d+)?\])?\s*(.*)/;
}
elsif ($fmt eq 'logplex')
{
# The output format of the Heroku pg logs is as follows: timestamp app[dyno]: message
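# A line of this general shape would match the compiled regex below (illustrative
# sample, not taken from a real Heroku log):
#   2025-03-16T12:00:00+00:00 host app[postgres.3]: [DATABASE] [7-1] LOG: duration: 12.3 ms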
$compiled_prefix =
qr/^(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)(\.\d+)?[+\-]\d{2}:\d{2}\s+(?:[^\s]+)?\s*app\[postgres\.(\d+)\][:]?\s+\[([^\]]+)\]\s+\[\d+\-\d+\]\s+(.*?)\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(.*)/;
unshift(@prefix_params, 't_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_ms', 't_pid', 't_dbname');
push(@prefix_params, 't_logprefix', 't_loglevel', 't_query');
$other_syslog_line = qr/^(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)(?:\.\d+)?[+\-]\d{2}:\d{2}\s+(?:[^\s]+)?\s*app\[(postgres)\.(\d+)\][:]?\s+\[([^\]]+)\]\s+\[\d+\-\d+\]\s*(.*)/;
}
elsif ($fmt eq 'rds')
{
# The output format of the RDS pg logs is as follows: %t:%r:%u@%d:[%p]: message
# With Cloudwatch it is prefixed with another timestamp
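# A line of this general shape would match the compiled regex below (illustrative
# sample, not taken from a real RDS/CloudWatch log):
#   2025-03-16 12:00:00 UTC:10.0.0.1(5432):myuser@mydb:[12345]:LOG: duration: 1.234 ms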
$compiled_prefix =
qr/^(?:\d+-\d+-\d+T\d+:\d+:\d+\.\d+Z)?\s*(\d+)-(\d+)-(\d+) (\d+):(\d+):(\d+)\s*[^:]*:([^:]*):([^\@]*)\@([^:]*):\[(\d+)\]:(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(.*)/;
unshift(@prefix_params, 't_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_client', 't_dbuser', 't_dbname', 't_pid', 't_loglevel', 't_query');
}
elsif ($fmt eq 'redshift')
{
# Look at format of the AWS redshift pg logs, for example:
# '2020-03-07T16:09:43Z UTC [ db=dev user=rdsdb pid=16929 userid=1 xid=7382 ]'
$compiled_prefix =
qr/^'(\d+)-(\d+)-(\d+)T(\d+):(\d+):(\d+)Z [^\s]+ \[ db=(.*?) user=(.*?) pid=(\d+) userid=\d+ xid=(?:.*?) \]' (LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(.*)/;
unshift(@prefix_params, 't_year', 't_month', 't_day', 't_hour', 't_min', 't_sec', 't_dbname', 't_dbuser', 't_pid', 't_loglevel', 't_query');
}
elsif ($fmt eq 'stderr')
{
$compiled_prefix =
qr/^(\d{10}\.\d{3}|\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})(\.\d+)?(?: [A-Z\+\-\d]{3,6})?\s\[([0-9a-f\.]+)\][:]*\s(?:\[\d+\-\d+\])?\s*(.*?)\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)/;
push(@prefix_params, 't_timestamp', 't_ms', 't_pid', 't_logprefix', 't_loglevel', 't_query');
}
elsif ($fmt eq 'default')
{
$fmt = 'stderr';
$compiled_prefix =
qr/^(\d{10}\.\d{3}|\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})(\.\d+)?(?: [A-Z\+\-\d]{3,6})?\s\[([0-9a-f\.]+)\][:]*\s(.*?)\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)/;
push(@prefix_params, 't_timestamp', 't_ms', 't_pid', 't_logprefix', 't_loglevel', 't_query');
}
return $fmt;
}
sub check_regex
{
my ($pattern, $varname) = @_;
eval {m/$pattern/i;};
if ($@) {
localdie("FATAL: '$varname' invalid regex '$pattern', $!\n");
}
}
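# build_incremental_reports: for each per-day directory, load the binary
# statistics (*.bin) written by the parsing processes, generate the daily HTML
# reports, then the weekly reports, and finally the global index.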
sub build_incremental_reports
{
my @build_directories = @_;
my $destdir = $html_outdir || $outdir;
my %weeks_directories = ();
foreach my $bpath (sort @build_directories)
{
my $binpath = '';
$binpath = $1 if ($bpath =~ s/^(.*\/)(\d+\-\d+\-\d+)$/$2/);
&logmsg('DEBUG', "Building incremental report for " . $bpath);
$incr_date = $bpath;
$last_incr_date = $bpath;
# Set the path to binary files
$bpath =~ s/\-/\//g;
# Get the week number following the date
$incr_date =~ /^(\d+)-(\d+)\-(\d+)$/;
my $wn = &get_week_number($1, $2, $3);
if (!$noweekreport)
{
if ($rebuild || !exists $weeks_directories{$wn})
{
$weeks_directories{$wn}{dir} = "$1-$2";
$weeks_directories{$wn}{prefix} = $binpath if ($binpath);
}
}
# First clear previous stored statistics
&init_stats_vars();
# Load all data gathered by all the different processes
$destdir = $binpath || $outdir;
if (opendir(DIR, "$destdir/$bpath"))
{
my @mfiles = grep { !/^\./ && ($_ =~ /\.bin$/) } readdir(DIR);
closedir DIR;
foreach my $f (@mfiles)
{
my $fht = new IO::File;
$fht->open("< $destdir/$bpath/$f") or localdie("FATAL: can't open file $destdir/$bpath/$f, $!\n");
load_stats($fht);
$fht->close();
}
}
$destdir = $html_outdir || $outdir;
foreach my $db (sort keys %DBLIST)
{
#next if ($#dbname >= 0 and !grep(/^$db$/i, @dbname));
my $tmp_dir = "$destdir/$db";
$tmp_dir = $destdir if (!$report_per_database);
&logmsg('LOG', "Ok, generating HTML daily report into $tmp_dir/$bpath/...");
# set path and create subdirectories
mkdir("$tmp_dir") if (!-d "$tmp_dir");
if ($bpath =~ m#^(\d+)/(\d+)/(\d+)#)
{
mkdir("$tmp_dir/$1") if (!-d "$tmp_dir/$1");
mkdir("$tmp_dir/$1/$2") if (!-d "$tmp_dir/$1/$2");
mkdir("$tmp_dir/$1/$2/$3") if (!-d "$tmp_dir/$1/$2/$3");
}
else
{
&logmsg('ERROR', "invalid path: $bpath, can not create subdirectories.");
}
$fh = new IO::File ">$tmp_dir/$bpath/$current_out_file";
if (not defined $fh) {
localdie("FATAL: can't write to $tmp_dir/$bpath/$current_out_file, $!\n");
}
&dump_as_html('../../..', $db);
$fh->close;
}
}
# Build a report per week
foreach my $wn (sort { $a <=> $b } keys %weeks_directories)
{
&init_stats_vars();
# Get all days of the current week
my $getwnb = $wn;
$getwnb-- if (!$iso_week_number);
my @wdays = &get_wdays_per_month($getwnb, $weeks_directories{$wn}{dir});
my $binpath = '';
$binpath = $weeks_directories{$wn}{prefix} if (defined $weeks_directories{$wn}{prefix});
my $wdir = '';
# Load data per day
foreach my $bpath (@wdays)
{
$incr_date = $bpath;
$bpath =~ s/\-/\//g;
$incr_date =~ /^(\d+)\-(\d+)\-(\d+)$/;
$wdir = "$1/week-$wn";
$destdir = $binpath || $outdir;
# Load all data gathered by all the different processes
if (-e "$destdir/$bpath")
{
unless(opendir(DIR, "$destdir/$bpath")) {
localdie("FATAL: can't opendir $destdir/$bpath: $!\n");
}
my @mfiles = grep { !/^\./ && ($_ =~ /\.bin$/) } readdir(DIR);
closedir DIR;
foreach my $f (@mfiles)
{
my $fht = new IO::File;
$fht->open("< $destdir/$bpath/$f") or localdie("FATAL: can't open file $destdir/$bpath/$f, $!\n");
load_stats($fht);
$fht->close();
}
}
}
$destdir = $html_outdir || $outdir;
foreach my $db (sort keys %DBLIST)
{
#next if ($#dbname >= 0 and !grep(/^$db$/i, @dbname));
my $tmp_dir = "$destdir/$db";
$tmp_dir = $destdir if (!$report_per_database);
&logmsg('LOG', "Ok, generating HTML weekly report into $tmp_dir/$wdir/...");
mkdir("$tmp_dir") if (!-d "$tmp_dir");
my $path = $tmp_dir;
foreach my $d (split('/', $wdir))
{
mkdir("$path/$d") if (!-d "$path/$d");
$path .= "/$d";
}
$fh = new IO::File ">$tmp_dir/$wdir/$current_out_file";
if (not defined $fh) {
localdie("FATAL: can't write to $tmp_dir/$wdir/$current_out_file, $!\n");
}
&dump_as_html('../..', $db);
$fh->close;
}
}
# Generate global index to access incremental reports
&build_global_index();
}
sub build_month_reports
{
my ($month_path, @build_directories) = @_;
# First clear previous stored statistics
&init_stats_vars();
foreach my $bpath (sort @build_directories) {
$incr_date = $bpath;
$last_incr_date = $bpath;
# Set the path to binary files
$bpath =~ s/\-/\//g;
# Extract the date parts from the directory name
$incr_date =~ /^(\d+)-(\d+)\-(\d+)$/;
&logmsg('DEBUG', "reading month statistics from $outdir/$bpath");
# Load all data gathered by all the different processes
unless(opendir(DIR, "$outdir/$bpath")) {
localdie("FATAL: can't opendir $outdir/$bpath: $!\n");
}
my @mfiles = grep { !/^\./ && ($_ =~ /\.bin$/) } readdir(DIR);
closedir DIR;
foreach my $f (@mfiles) {
my $fht = new IO::File;
$fht->open("< $outdir/$bpath/$f") or localdie("FATAL: can't open file $outdir/$bpath/$f, $!\n");
load_stats($fht);
$fht->close();
}
}
my $dest_dir = $html_outdir || $outdir;
foreach my $db (sort keys %DBLIST)
{
my $tmp_dir = "$dest_dir/$db";
$tmp_dir = $dest_dir if (!$report_per_database);
&logmsg('LOG', "Ok, generating HTML monthly report into $tmp_dir/$month_path/index.html");
mkdir("$tmp_dir") if (!-d "$tmp_dir");
my $path = $tmp_dir;
foreach my $d (split('/', $month_path)) {
mkdir("$path/$d") if (!-d "$path/$d");
$path .= "/$d";
}
$fh = new IO::File ">$tmp_dir/$month_path/index.html";
if (not defined $fh) {
localdie("FATAL: can't write to $tmp_dir/$month_path/index.html, $!\n");
}
&dump_as_html('../..', $db);
$fh->close;
}
# Generate global index to access incremental reports
&build_global_index();
}
sub build_day_reports
{
my ($day_path, @build_directories) = @_;
# First clear previous stored statistics
&init_stats_vars();
foreach my $bpath (sort @build_directories) {
$incr_date = $bpath;
$last_incr_date = $bpath;
# Set the path to binary files
$bpath =~ s/\-/\//g;
# Extract the date parts from the directory name
$incr_date =~ /^(\d+)-(\d+)\-(\d+)$/;
&logmsg('DEBUG', "reading month statistics from $outdir/$bpath");
# Load all data gathered by all the different processes
unless(opendir(DIR, "$outdir/$bpath")) {
localdie("FATAL: can't opendir $outdir/$bpath: $!\n");
}
my @mfiles = grep { !/^\./ && ($_ =~ /\.bin$/) } readdir(DIR);
closedir DIR;
foreach my $f (@mfiles) {
my $fht = new IO::File;
$fht->open("< $outdir/$bpath/$f") or localdie("FATAL: can't open file $outdir/$bpath/$f, $!\n");
load_stats($fht);
$fht->close();
}
}
my $dest_dir = $html_outdir || $outdir;
foreach my $db (sort keys %DBLIST)
{
my $tmp_dir = "$dest_dir/$db";
$tmp_dir = $dest_dir if (!$report_per_database);
&logmsg('LOG', "Ok, generating HTML daily report into $tmp_dir/$day_path/index.html");
mkdir("$tmp_dir") if (!-d "$tmp_dir");
my $path = $tmp_dir;
foreach my $d (split('/', $day_path))
{
mkdir("$path/$d") if (!-d "$path/$d");
$path .= "/$d";
}
$fh = new IO::File ">$tmp_dir/$day_path/index.html";
if (not defined $fh) {
localdie("FATAL: can't write to $tmp_dir/$day_path/index.html, $!\n");
}
&dump_as_html('../..', $db);
$fh->close;
}
# Generate global index to access incremental reports
&build_global_index();
}
sub build_global_index
{
&logmsg('LOG', "Ok, generating global index to access incremental reports...");
my $dest_dir = $html_outdir || $outdir;
# Get database directories
unless(opendir(DIR, "$dest_dir")) {
localdie("FATAL: can't opendir $dest_dir: $!\n");
}
my @dbs = grep { !/^\./ && !/^\d{4}$/ && -d "$dest_dir/$_" } readdir(DIR);
closedir DIR;
@dbs = ($DBALL) if (!$report_per_database);
foreach my $db (@dbs)
{
#next if ($#dbname >= 0 and !grep(/^$db$/i, @dbname));
my $tmp_dir = "$dest_dir/$db";
$tmp_dir = $dest_dir if (!$report_per_database);
&logmsg('DEBUG', "writing global index into $tmp_dir/index.html");
$fh = new IO::File ">$tmp_dir/index.html";
if (not defined $fh) {
localdie("FATAL: can't write to $tmp_dir/index.html, $!\n");
}
my $date = localtime(time);
my @tmpjscode = @jscode;
my $path_prefix = '.';
$path_prefix = '..' if ($report_per_database);
for (my $i = 0; $i <= $#tmpjscode; $i++) {
$tmpjscode[$i] =~ s/EDIT_URI/$path_prefix/;
}
my $local_title = 'Global Index on incremental reports';
if ($report_title) {
$local_title = 'Global Index - ' . $report_title;
}
print $fh qq{
pgBadger :: $local_title
@tmpjscode
};
}
sub print_vacuum_per_table
{
my $curdb = shift;
# VACUUM stats per table
my $total_count = 0;
my $total_idxscan = 0;
my $total_hits = 0;
my $total_misses = 0;
my $total_dirtied = 0;
my $total_skippins = 0;
my $total_skipfrozen = 0;
my $total_records = 0;
my $total_full_page = 0;
my $total_bytes = 0;
my $total_pages_frozen = 0;
my $total_tuples_frozen = 0;
my $vacuum_info = '';
my @main_vacuum = ('unknown',0);
foreach my $t (sort {
$autovacuum_info{$curdb}{tables}{$b}{vacuums} <=> $autovacuum_info{$curdb}{tables}{$a}{vacuums}
} keys %{$autovacuum_info{$curdb}{tables}})
{
$vacuum_info .= "
};
delete $drawn_graphs{queriesbyhost_graph};
}
sub display_plan
{
my ($id, $plan) = @_;
# Only TEXT format plan can be sent to Depesz site.
if ($plan !~ /Node Type:|"Node Type":|Node-Type/s) {
return "
};
&print_overall_statistics($curdb);
}
if (!$disable_hourly && !$pgbouncer_only)
{
# Build graphs based on hourly stat
&compute_query_graphs($curdb);
# Show global SQL traffic
&print_sql_traffic($curdb);
# Show hourly statistics
&print_general_activity($curdb);
}
if (!$disable_connection && !$pgbouncer_only)
{
print $fh qq{
Connections
};
# Draw connections information
&print_established_connection($curdb) if (!$disable_hourly);
# Show per database/user connections
&print_database_connection($curdb);
# Show per user connections
&print_user_connection($curdb);
# Show per client ip connections
&print_host_connection($curdb);
}
# Show session per database statistics
if (!$disable_session && !$pgbouncer_only)
{
print $fh qq{
Sessions
};
# Show number of simultaneous sessions
&print_simultaneous_session($curdb);
# Show histogram for session times
&print_histogram_session_times($curdb);
# Show per database sessions
&print_database_session($curdb);
# Show per user sessions
&print_user_session($curdb);
# Show per host sessions
&print_host_session($curdb);
# Show per application sessions
&print_app_session($curdb);
}
# Display checkpoint and temporary files report
if (!$disable_checkpoint && !$pgbouncer_only && (!$report_per_database || $curdb eq $DBALL)) {
print $fh qq{
};
# Show temporary files detailed information
&print_temporary_file($curdb);
# Show information about queries generating temporary files
&print_tempfile_report($curdb);
}
if (!$disable_autovacuum && !$pgbouncer_only)
{
print $fh qq{
};
# Show detailed vacuum/analyse information
&print_vacuum($curdb);
}
if (!$disable_lock && !$pgbouncer_only)
{
print $fh qq{
};
# Lock stats per type
&print_lock_type($curdb);
# Show lock wait detailed information
&print_lock_queries_report($curdb);
}
if (!$disable_query && !$pgbouncer_only)
{
print $fh qq{
};
# INSERT/DELETE/UPDATE/SELECT repartition
if (!$disable_type)
{
&print_query_type($curdb);
# Show requests per database
&print_query_per_database($curdb);
# Show requests per user
&print_query_per_user($curdb);
# Show requests per host
&print_query_per_host($curdb);
# Show requests per application
&print_query_per_application($curdb);
;
# Show cancelled queries detailed information
&print_cancelled_queries($curdb);
# Show information about cancelled queries
&print_cancelled_report($curdb);
}
print $fh qq{
};
# Show histogram for query times
&print_histogram_query_times($curdb);
# Show top information
&print_slowest_individual_queries($curdb);
# Show queries that took up the most time
&print_time_consuming($curdb);
# Show most frequent queries
&print_most_frequent($curdb);
# Print normalized slowest queries
&print_slowest_queries($curdb);
# Show prepare that took up the most time
&print_prepare_consuming($curdb);
# Show bind that took up the most time
&print_bind_consuming($curdb);
}
# Show pgbouncer sessions and connections statistics
if (exists $pgb_overall_stat{peak} && (!$report_per_database || $curdb eq $DBALL))
{
# Build pgbouncer graph based on hourly stats
&compute_pgbouncer_graphs();
my $active = '';
$active = ' active-slide' if ($pgbouncer_only);
print $fh qq{
pgBouncer
};
# Draw pgbouncer own statistics
&print_pgbouncer_stats() if (!$disable_hourly);
# Draw connections information
&print_established_pgb_connection() if (!$disable_hourly);
# Show per database/user connections
&print_database_pgb_connection();
# Show per user connections
&print_user_pgb_connection();
# Show per client ip connections
&print_host_pgb_connection();
# Show number of simultaneous sessions
&print_simultaneous_pgb_session();
# Show histogram for session times
&print_histogram_pgb_session_times();
# Show per database sessions
&print_database_pgb_session();
# Show per user sessions
&print_user_pgb_session();
# Show per host sessions
&print_host_pgb_session();
# Show most used reserved pool
&show_pgb_reserved_pool();
# Show Most Frequent Errors/Events
&show_pgb_error_as_html();
}
}
# Show errors report
if (!$disable_error)
{
if (!$error_only)
{
print $fh qq{
};
} else {
print $fh qq{
};
}
# Show log level distribution
&print_log_level($curdb);
# Show error code distribution
&print_error_code($curdb) if (scalar keys %errors_code > 0);
# Show Most Frequent Errors/Events
&show_error_as_html($curdb);
}
# Dump the html footer
&html_footer($curdb);
}
sub url_escape
{
my $toencode = shift;
return if (!$toencode);
utf8::encode($toencode) if (($] >= 5.008) && utf8::is_utf8($toencode));
if (EBCDIC) {
$toencode =~ s/([^a-zA-Z0-9_.~-])/uc sprintf("%%%02x",$E2A[ord($1)])/eg;
} else {
$toencode =~ s/([^a-zA-Z0-9_.~-])/uc sprintf("%%%02x",ord($1))/eg;
}
return $toencode;
}
sub escape_html
{
$_[0] =~ s/<([\/a-zA-Z][\s\>]*)/\<$1/sg;
return $_[0];
}
sub print_log_level
{
my $curdb = shift;
my %infos = ();
# Show log types
my $total_logs = 0;
foreach my $d (sort keys %{$logs_type{$curdb}}) {
$total_logs += $logs_type{$curdb}{$d};
}
my $logtype_info = '';
foreach my $d (sort keys %{$logs_type{$curdb}}) {
next if (!$logs_type{$curdb}{$d});
$logtype_info .= "
};
}
my $datadef = '';
foreach my $k (sort keys %data) {
$datadef .= "['$k', $data{$k}],";
}
$datadef =~ s/,$//;
return <
EOF
}
sub jqplot_histograph
{
my ($buttonid, $divid, $data1, $data2, $legend1, $legend2) = @_;
if (!$data1) {
return qq{
NO DATASET
};
}
$legend1 ||= 'Queries';
my $y2decl = '';
my $y2vals = '';
if ($data2) {
$legend2 ||= 'Avg. duration';
$y2decl = "var lines_${buttonid} = [$data2];";
$y2vals = ", lines_${buttonid}";
}
my $title = '';
return <
EOF
}
sub jqplot_duration_histograph
{
my ($buttonid, $divid, $legend, $range, %data) = @_;
if (scalar keys %data == 0) {
return qq{
NO DATASET
};
}
$legend ||= 'Queries';
my $bars = '';
for (my $i = 1; $i <= $#{$range}; $i++) {
my $k = "$range->[$i-1]-$range->[$i]ms";
my $lbl = "'" . &convert_time($range->[$i-1]) . '-' . &convert_time($range->[$i]) . "'";
$bars .= "[ $lbl, $data{$k}],";
}
my $k = "> $range->[-1]ms";
$bars .= "[ '> " . &convert_time($range->[-1]) . "', $data{$k}]";
my $title = '';
return <
EOF
}
sub build_log_line_prefix_regex
{
my $llp = shift;
my %regex_map = (
'%a' => [('t_appname', '(.*?)')], # application name
'%u' => [('t_dbuser', '([0-9a-zA-Z\_\[\]\-\.]*)')], # user name
'%d' => [('t_dbname', '([0-9a-zA-Z\_\[\]\-\.]*)')], # database name
'%r' => [('t_hostport', '([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?[\(\d\)]*')], # remote host and port
'%h' => [('t_client', '([a-zA-Z0-9\-\.]+|\[local\]|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|[0-9a-fA-F:]+)?')], # remote host
'%p' => [('t_pid', '(\d+)')], # process ID
'%Q' => [('t_queryid', '([\-]*\d+)')], # Query ID
'%n' => [('t_epoch', '(\d{10}\.\d{3})')], # timestamp as Unix epoch
'%t' => [('t_timestamp', '(\d{4}-\d{2}-\d{2}[\sT]\d{2}:\d{2}:\d{2})[Z]*(?:[ \+\-][A-Z\+\-\d]{3,6})?')], # timestamp without milliseconds
'%m' => [('t_mtimestamp', '(\d{4}-\d{2}-\d{2}[\sT]\d{2}:\d{2}:\d{2}\.\d+)(?:[ \+\-][A-Z\+\-\d]{3,6})?')], # timestamp with milliseconds
'%l' => [('t_session_line', '(\d+)')], # session line number
'%s' => [('t_session_timestamp', '(\d{4}-\d{2}-\d{2}[\sT]\d{2}):\d{2}:\d{2}(?:[ \+\-][A-Z\+\-\d]{3,6})?')], # session start timestamp
'%c' => [('t_session_id', '([0-9a-f\.]*)')], # session ID
'%v' => [('t_virtual_xid', '([0-9a-f\.\/]*)')], # virtual transaction ID
'%x' => [('t_xid', '([0-9a-f\.\/]*)')], # transaction ID
'%i' => [('t_command', '([0-9a-zA-Z\.\-\_\s]*)')], # command tag
'%e' => [('t_sqlstate', '([0-9a-zA-Z]+)')], # SQL state
'%b' => [('t_backend_type', '(.*?)')], # backend type
);
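# Illustrative example (hypothetical prefix, not taken from this code): with
# log_line_prefix = '%t [%p]: user=%u,db=%d ' the loop below produces a regex
# capturing t_timestamp, t_pid, t_dbuser and t_dbname, in that order.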
my $param_list = [];
$llp =~ s/([\[\]\|\(\)\{\}])/\\$1/g;
$llp =~ s/\%l([^\d])\d+/\%l$1\\d\+/;
my $q_prefix = '';
if ($llp =~ s/(.*)\%q(.*)/$1$2/) {
$q_prefix = $1;
}
while ($llp =~ s/(\%[audrhpQntmlscvxieb])/$regex_map{"$1"}->[1]/) {
push(@$param_list, $regex_map{"$1"}->[0]);
}
my $q_prefix_param = [];
if ($q_prefix) {
while ($q_prefix =~ s/(\%[audrhpQntmlscvxieb])/$regex_map{"$1"}->[1]/) {
push(@$q_prefix_param, $regex_map{"$1"}->[0]);
}
push(@$q_prefix_param, 't_loglevel', 't_query');
}
# replace %% by a single %
$llp =~ s/\%\%/\%/g;
$q_prefix =~ s/\%\%/\%/g;
$q_prefix = qr/$q_prefix\s*(LOG|WARNING|ERROR|FATAL|PANIC|DETAIL|STATEMENT|HINT|CONTEXT|LOCATION):\s+(?:[0-9A-Z]{5}:\s+)?(.*)/;
# t_session_id (%c) can naturally replace pid as unique session id
# when it is given in log_line_prefix and pid is not present.
$use_sessionid_as_pid = 1 if ( grep(/t_session_id/, @$param_list) && !grep(/t_pid/, @$param_list) );
# Check regex in log line prefix from command line
&check_regex($llp, '--prefix');
return (
'llp' => $llp, 'param_list' => $param_list,
'q_prefix' => $q_prefix, 'q_param_list' => $q_prefix_param
);
}
####
# get_file_size: returns the size in bytes of the given log file.
# The returned size is set to -1 when pgbadger can not determine
# the file size (piped input, remote file, bzip2 compressed file
# or privilege issue). Outside these cases, if we can't get the
# size of a remote file, pgbadger exits with a fatal error.
####
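# Minimal usage sketch (hypothetical caller):
#   my $size = &get_file_size($logfile);
#   # $size == -1 means the size could not be determined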
sub get_file_size
{
my $logf = shift;
# Remove log format from log file if any
$logf =~ s/[\r\n]+//sg;
$logf =~ s/:(stderr|csv|syslog|pgbouncer|jsonlog|logplex|rds|redshift)\d*$//i;
my $http_download = ($logf =~ /^(http[s]*:|[s]*ftp:)/i) ? 1 : 0;
my $ssh_download = ($logf =~ /^ssh:/i) ? 1 : 0;
my $iscompressed = ($logf =~ $compress_extensions) ? 1 : 0;
# Get file size
my $totalsize = 0;
# Log entries extracted from the journalctl command are of unknown size
if ( $journalctl_cmd && ($logf =~ m/\Q$journalctl_cmd\E/) )
{
$totalsize = -1;
}
# Log entries extracted from a custom command are of unknown size
elsif ( $log_command && ($logf =~ m/\Q$log_command\E/) )
{
$totalsize = -1;
}
# Same from stdin
elsif ($logf eq '-')
{
$totalsize = -1;
}
# Regular local files can be "stated" if they are not compressed
elsif (!$remote_host && !$http_download && !$ssh_download && !$iscompressed)
{
eval {
$totalsize = (stat($logf))[7];
};
$totalsize = -1 if ($@);
}
# For uncompressed files try to get the size following the remote access protocol
elsif (!$iscompressed)
{
# Use curl to try to get remote file size if it is not compressed
if ($http_download) {
&logmsg('DEBUG', "Looking for file size using command: $curl_command --head \"$logf\" | grep \"Content-Length:\" | awk '{print \$2}'");
$totalsize = `$curl_command --head "$logf" | grep "Content-Length:" | awk '{print \$2}'`;
chomp($totalsize);
localdie("FATAL: can't get size of remote file, please check what's going wrong with command: $curl_command --head \"$logf\" | grep \"Content-Length:\"\n") if ($totalsize eq '');
} elsif ($ssh_download && $logf =~ m#^ssh:\/\/([^\/]+)/(.*)#i) {
my $host_info = $1;
my $file = $2;
$file =~ s/[\r\n]+//sg;
$file =~ s/:(stderr|csv|syslog|pgbouncer|jsonlog|logplex|rds|redshift)\d*$//i;
my $ssh = $ssh_command || 'ssh';
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
&logmsg('DEBUG', "Looking for file size using command: $ssh $host_info \"ls -l '$file'\" | awk '{print \$5}'");
$totalsize = `$ssh $host_info "ls -l '$file'" | awk '{print \$5}'`;
chomp($totalsize);
localdie("FATAL: can't get size of remote file, please check what's going wrong with command: $ssh $host_info \"ls -l '$file'\"\n") if ($totalsize eq '');
} elsif ($remote_host) {
&logmsg('DEBUG', "Looking for file size using command: $remote_command \"ls -l '$logf'\" | awk '{print \$5}'");
$totalsize = `$remote_command "ls -l '$logf'" | awk '{print \$5}'`;
chomp($totalsize);
localdie("FATAL: can't get size of remote file, please check what's going wrong with command: $ssh_command \"ls -l '$logf'\"\n") if ($totalsize eq '');
}
chomp($totalsize);
&logmsg('DEBUG', "Remote file size: $totalsize");
}
# The real size of the file is unknown for compressed files, try to find
# the size using an uncompress command (bz2 does not report the real size)
elsif (!$http_download && $logf =~ $compress_extensions)
{
my $cmd_file_size = $gzip_uncompress_size;
if ($logf =~ /\.zip$/i) {
$cmd_file_size = $zip_uncompress_size;
} elsif ($logf =~ /\.xz$/i) {
$cmd_file_size = $xz_uncompress_size;
} elsif ($logf =~ /\.lz4$/i) {
$cmd_file_size = $lz4_uncompress_size;
} elsif ($logf =~ /\.zst$/i) {
$cmd_file_size = $zstd_uncompress_size;
} elsif ($logf =~ /\.bz2$/i) {
$cmd_file_size = "ls -l '%f' | awk '{print \$5}'";
}
if (!$remote_host && !$http_download && !$ssh_download) {
$cmd_file_size =~ s/\%f/$logf/g;
&logmsg('DEBUG', "Looking for file size using command: $cmd_file_size");
$totalsize = `$cmd_file_size`;
chomp($totalsize);
if ($totalsize !~ /\d+/) {
if ($logf =~ /\.lz4$/i) {
# lz4 archive must be compressed with --content-size option to determine size. If not $totalsize is '-'
&logmsg('DEBUG', "Can't determine lz4 file size. Maybe file compressed without --content-size option ?");
} else {
&logmsg('DEBUG', "Can't determine uncompressed file size. Guess with compressed size * $XZ_FACTOR");
}
# Thus we use a hack to determine size
$cmd_file_size = "ls -l '%f' | awk '{print \$5}'";
$cmd_file_size =~ s/\%f/$logf/g;
$totalsize = `$cmd_file_size`;
$totalsize *= $XZ_FACTOR;
}
} elsif ($ssh_download && $logf =~ m#^ssh://([^\/]+)/(.*)#i) {
my $host_info = $1;
my $file = $2;
my $ssh = $ssh_command || 'ssh';
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
$cmd_file_size =~ s/\%f/$file/g;
$cmd_file_size =~ s/\$/\\\$/g;
&logmsg('DEBUG', "Looking for file size using command: $ssh $host_info \"$cmd_file_size\"");
$totalsize = `$ssh $host_info \"$cmd_file_size\"`;
} else {
$cmd_file_size =~ s/\%f/$logf/g;
$cmd_file_size =~ s/\$/\\\$/g;
&logmsg('DEBUG', "Looking for remote file size using command: $remote_command \"$cmd_file_size\"");
$totalsize = `$remote_command \"$cmd_file_size\"`;
}
chomp($totalsize);
# For bz2 compressed files we don't know the real size
if ($logf =~ /\.bz2$/i) {
# apply deflate estimation factor
$totalsize *= $BZ_FACTOR;
}
}
# Compressed files downloaded over HTTP can't report their real size, so get the
# compressed file size and estimate the real size using the bzip2, gzip and xz factors.
elsif ($http_download)
{
&logmsg('DEBUG', "Looking for file size using command: $curl_command --head \"$logf\" | grep \"Content-Length:\" | awk '{print \$2}'");
$totalsize = `$curl_command --head \"$logf\" | grep "Content-Length:" | awk '{print \$2}'`;
chomp($totalsize);
localdie("FATAL: can't get size of remote file, please check what's going wrong with command: $curl_command --head \"$logf\" | grep \"Content-Length:\"\n") if ($totalsize eq '');
&logmsg('DEBUG', "With http access size real size of a compressed file is unknown but use Content-Length wirth compressed side.");
# For all compressed files we don't know the
# real size, apply a deflate estimation factor
if ($logf =~ /\.bz2$/i)
{
# apply deflate estimation factor
$totalsize *= $BZ_FACTOR;
}
elsif ($logf =~ /\.(zip|gz)$/i)
{
$totalsize *= $GZ_FACTOR;
}
elsif ($logf =~ /\.(xz|lz4|zst)$/i)
{
$totalsize *= $XZ_FACTOR;
}
}
return $totalsize;
}
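# uncompress_commands: pick the cat-like program used to stream the file and the
# grep-like command used to extract a log sample, based on the file extension
# (gzip tools by default, bzcat/unzip/lz4cat/zstdcat/xzcat otherwise).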
sub uncompress_commands
{
my $file = shift();
return if (!$file);
my $uncompress = $zcat;
my $sample_cmd = 'zgrep';
if (($file =~ /\.bz2/i) && ($zcat =~ /^$zcat_cmd$/))
{
$uncompress = $bzcat;
$sample_cmd = 'bzgrep';
} elsif (($file =~ /\.zip/i) && ($zcat =~ /^$zcat_cmd$/)) {
$uncompress = $ucat;
} elsif (($file =~ /\.lz4/i) && ($zcat =~ /^$zcat_cmd$/)) {
$uncompress = $lz4cat;
} elsif (($file =~ /\.zst/i) && ($zcat =~ /^$zcat_cmd$/)) {
$uncompress = $zstdcat;
}
elsif (($file =~ /\.xz/i) && ($zcat =~ /^$zcat_cmd$/))
{
$uncompress = $xzcat;
$sample_cmd = 'xzgrep';
}
return ($uncompress, $sample_cmd);
}
sub get_log_file
{
my ($logf, $totalsize, $sample_only) = @_;
my $lfile = undef;
$LOG_EOL_TYPE = 'LF';
return $lfile if ($totalsize == 0);
$logf =~ s/:(stderr|csv|syslog|pgbouncer|jsonlog|logplex|rds|redshift)\d*$//i;
my $http_download = ($logf =~ /^(http[s]*:|[s]*ftp:)/i) ? 1 : 0;
my $ssh_download = ($logf =~ /^ssh:/i) ? 1 : 0;
my $iscompressed = ($logf =~ $compress_extensions) ? 1 : 0;
chomp($logf);
# Open and return a file handle to parse the log
if ( $journalctl_cmd && ($logf =~ m/\Q$journalctl_cmd\E/) )
{
# For journalctl command we need to use a pipe as file handle
if (!$remote_host) {
open($lfile, '-|', $logf) || localdie("FATAL: cannot read output of command: $logf. $!\n");
}
else
{
if (!$sample_only)
{
&logmsg('DEBUG', "Retrieving log entries using command: $remote_command \"$logf\" |");
# Open a pipe to remote journalctl program
open($lfile, '-|', "$remote_command \"$logf\"") || localdie("FATAL: cannot read from pipe to $remote_command \"$logf\". $!\n");
}
else
{
&logmsg('DEBUG', "Retrieving log entries using command: $remote_command \"'$logf' -n 100\" |");
# Open a pipe to remote journalctl program
open($lfile, '-|', "$remote_command \"'$logf' -n 100\"") || localdie("FATAL: cannot read from pipe to $remote_command \"'$logf' -n 100\". $!\n");
}
}
}
elsif ( $log_command && ($logf =~ m/\Q$log_command\E/) )
{
# For custom command we need to use a pipe as file handle
if (!$remote_host) {
open($lfile, '-|', $logf) || localdie("FATAL: cannot read output of command: $logf. $!\n");
}
else
{
if (!$sample_only)
{
&logmsg('DEBUG', "Retrieving log entries using command: $remote_command \"$logf\" |");
# Open a pipe to remote custom program
open($lfile, '-|', "$remote_command \"$logf\"") || localdie("FATAL: cannot read from pipe to $remote_command \"$logf\". $!\n");
}
else
{
&logmsg('DEBUG', "Retrieving log entries using command: $remote_command \"'$logf' -n 100\" |");
# Open a pipe to remote custom program
open($lfile, '-|', "$remote_command \"'$logf' -n 100\"") || localdie("FATAL: cannot read from pipe to $remote_command \"'$logf' -n 100\". $!\n");
}
}
}
elsif (!$iscompressed)
{
if (!$remote_host && !$http_download && !$ssh_download)
{
if ($logf ne '-')
{
$LOG_EOL_TYPE = &get_eol_type($logf);
open($lfile, '<', $logf) || localdie("FATAL: cannot read log file $logf. $!\n");
}
else
{
$lfile = *STDIN;
}
}
else
{
if (!$sample_only)
{
if (!$http_download)
{
if ($ssh_download && $logf =~ m#^ssh://([^\/]+)/(.*)#i)
{
my $host_info = $1;
my $file = $2;
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
my $ssh = $ssh_command || 'ssh';
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
&logmsg('DEBUG', "Retrieving log entries using command: $ssh $host_info \"cat '$file'\" |");
# Open a pipe to cat program
open($lfile, '-|', "$ssh $host_info \"cat '$file'\"") || localdie("FATAL: cannot read from pipe to $ssh $host_info \"cat '$file'\". $!\n");
}
else
{
&logmsg('DEBUG', "Retrieving log entries using command: $remote_command \" cat '$logf'\" |");
# Open a pipe to cat program
open($lfile, '-|', "$remote_command \"cat '$logf'\"") || localdie("FATAL: cannot read from pipe to $remote_command \"cat '$logf'\". $!\n");
}
}
else
{
&logmsg('DEBUG', "Retrieving log entries using command: $curl_command --data-binary \"$logf\" |");
# Open a pipe to GET program
open($lfile, '-|', "$curl_command \"$logf\"") || localdie("FATAL: cannot read from pipe to $curl_command --data-binary \"$logf\". $!\n");
}
}
elsif (!$http_download)
{
if ($ssh_download && $logf =~ m#^ssh://([^\/]+)/(.*)#i)
{
my $host_info = $1;
my $file = $2;
my $ssh = $ssh_command || 'ssh';
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
&logmsg('DEBUG', "Retrieving log sample using command: $ssh $host_info \"tail -n 100 '$file'\" |");
# Open a pipe to cat program
open($lfile, '-|', "$ssh $host_info \"tail -n 100 '$file'\"") || localdie("FATAL: cannot read from pipe to $remote_command \"tail -n 100 '$logf'\". $!\n");
}
else
{
&logmsg('DEBUG', "Retrieving log sample using command: $remote_command \"tail -n 100 '$logf'\" |");
# Open a pipe to cat program
open($lfile, '-|', "$remote_command \"tail -n 100 '$logf'\"") || localdie("FATAL: cannot read from pipe to $remote_command \"tail -n 100 '$logf'\". $!\n");
}
}
else
{
&logmsg('DEBUG', "Retrieving log sample using command: $curl_command --data-binary --max-filesize 102400 \"$logf\" |");
# Open a pipe to GET program
open($lfile, '-|', "$curl_command --data-binary --max-filesize 102400 \"$logf\"") || localdie("FATAL: cannot read from pipe to $curl_command --data-binary --max-filesize 102400 \"$logf\". $!\n");
}
}
}
else
{
my ($uncompress, $sample_cmd) = &uncompress_commands($logf);
if (!$remote_host && !$http_download && !$ssh_download)
{
&logmsg('DEBUG', "Compressed log file, will use command: $uncompress \"$logf\"");
# Open a pipe to zcat program for compressed log
open($lfile, '-|', "$uncompress \"$logf\"") || localdie("FATAL: cannot read from pipe to $uncompress \"$logf\". $!\n");
}
else
{
if (!$sample_only)
{
if (!$http_download)
{
if ($ssh_download && $logf =~ m#^ssh://([^\/]+)/(.*)#i)
{
my $host_info = $1;
my $file = $2;
my $ssh = $ssh_command || 'ssh';
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
&logmsg('DEBUG', "Compressed log file, will use command: $ssh $host_info \"$uncompress '$file'\"");
# Open a pipe to zcat program for compressed log
open($lfile, '-|', "$ssh $host_info \"$uncompress '$file'\"") || localdie("FATAL: cannot read from pipe to $remote_command \"$uncompress '$logf'\". $!\n");
}
else
{
&logmsg('DEBUG', "Compressed log file, will use command: $remote_command \"$uncompress '$logf'\"");
# Open a pipe to zcat program for compressed log
open($lfile, '-|', "$remote_command \"$uncompress '$logf'\"") || localdie("FATAL: cannot read from pipe to $remote_command \"$uncompress '$logf'\". $!\n");
}
}
else
{
&logmsg('DEBUG', "Retrieving log entries using command: $curl_command \"$logf\" | $uncompress |");
# Open a pipe to GET program
open($lfile, '-|', "$curl_command \"$logf\" | $uncompress") || localdie("FATAL: cannot read from pipe to $curl_command \"$logf\". $!\n");
}
}
elsif (!$http_download)
{
if ($ssh_download && $logf =~ m#^ssh://([^\/]+)/(.*)#i)
{
my $host_info = $1;
my $file = $2;
my $ssh = $ssh_command || 'ssh';
if ($host_info =~ s/:(\d+)$//) {
$host_info = "-p $1 $host_info";
}
$ssh .= " -i $ssh_identity" if ($ssh_identity);
$ssh .= " $ssh_options" if ($ssh_options);
&logmsg('DEBUG', "Compressed log file, will use command: $ssh $host_info \"$uncompress '$file'\"");
# Open a pipe to zcat program for compressed log
open($lfile, '-|', "$ssh $host_info \"$sample_cmd -m 100 '[1234567890]' '$file'\"") || localdie("FATAL: cannot read from pipe to $ssh $host_info \"$sample_cmd -m 100 '' '$file'\". $!\n");
}
else
{
&logmsg('DEBUG', "Compressed log file, will use command: $remote_command \"$uncompress '$logf'\"");
# Open a pipe to zcat program for compressed log
open($lfile, '-|', "$remote_command \"$sample_cmd -m 100 '[1234567890]' '$logf'\"") || localdie("FATAL: cannot read from pipe to $remote_command \"$sample_cmd -m 100 '' '$logf'\". $!\n");
}
}
else
{
# Open a pipe to GET program
open($lfile, '-|', "$curl_command --max-filesize 102400 \"$logf\" | $uncompress") || localdie("FATAL: cannot read from pipe to $curl_command --max-filesize 102400 \"$logf\" | $uncompress . $!\n");
}
}
}
return $lfile;
}
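# split_logfile: compute the byte offsets at which each parallel worker should
# start parsing. Returns (0, -1) when the file can not be split (CSV input or
# unknown size), (start, totalsize) for small files, and otherwise a list of
# offsets aligned on line starts, terminated by the total file size.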
sub split_logfile
{
my $logf = shift;
my $totalsize = shift;
my $saved_pos = shift;
# CSV file can't be parsed using multiprocessing
return (0, -1) if ( $format eq 'csv' );
# Do not split the file if we don't know its size
return (0, -1) if ($totalsize <= 0);
my @chunks = (0);
# Seek to the last saved position
if ($last_parsed && $saved_pos) {
if ($saved_pos < $totalsize) {
$chunks[0] = $saved_pos;
}
}
# With small files < 16MB splitting is inefficient
if ($totalsize <= 16777216) {
return ($chunks[0], $totalsize);
}
my $i = 1;
my $lfile = &get_log_file($logf, $totalsize); # Get file handle to the file
if (defined $lfile)
{
while ($i < $queue_size)
{
my $pos = int(($totalsize/$queue_size) * $i);
if ($pos > $chunks[0])
{
$lfile->seek($pos, 0);
# Move the offset to the BEGINNING of each line, because
# the logic in process_file requires so
$pos= $pos + length(<$lfile>) - 1;
push(@chunks, $pos) if ($pos < $totalsize);
}
last if ($pos >= $totalsize);
$i++;
}
$lfile->close();
}
push(@chunks, $totalsize);
return @chunks;
}
# Return the week number of the year for a given date
sub get_week_number
{
my ($year, $month, $day) = @_;
# %U The week number of the current year as a decimal number, range 00 to 53, starting with the first
# Sunday as the first day of week 01.
# %V The ISO 8601 week number (see NOTES) of the current year as a decimal number, range 01 to 53,
# where week 1 is the first week that has at least 4 days in the new year.
# %W The week number of the current year as a decimal number, range 00 to 53, starting with the first
# Monday as the first day of week 01.
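# e.g. an invalid date such as (2024, 2, 30) returns -1; otherwise a zero-padded
# week number is returned (non-ISO numbering is shifted by one so weeks start at 01).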
# Check if the date is valid first
my $datefmt = POSIX::strftime("%Y-%m-%d", 1, 1, 1, $day, $month - 1, $year - 1900);
if ($datefmt ne "$year-$month-$day") {
return -1;
}
my $weekNumber = '';
if (!$iso_week_number)
{
if (!$week_start_monday) {
$weekNumber = POSIX::strftime("%U", 1, 1, 1, $day, $month - 1, $year - 1900);
} else {
$weekNumber = POSIX::strftime("%W", 1, 1, 1, $day, $month - 1, $year - 1900);
}
}
else
{
$weekNumber = POSIX::strftime("%V", 1, 1, 1, $day, $month - 1, $year - 1900);
}
return sprintf("%02d", (!$iso_week_number) ? $weekNumber+1 : $weekNumber);
}
# Returns the day number of the week for a given date
sub get_day_of_week
{
my ($year, $month, $day) = @_;
# %w The day of the week as a decimal, range 0 to 6, Sunday being 0.
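# e.g. strftime('%w') returns 0 for a Sunday; when weeks start on Monday the
# remapping below turns that 0 into 6, so Monday becomes day 0.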
my $weekDay = '';
if (!$week_start_monday && !$iso_week_number) {
# Start on sunday = 0
$weekDay = POSIX::strftime("%w", 1,1,1,$day,--$month,$year-1900);
} else {
# Start on monday = 1
$weekDay = POSIX::strftime("%w", 1,1,1,$day,--$month,$year-1900);
$weekDay = (($weekDay+7)-1) % 7;
}
return $weekDay;
}
# Returns all days (up to seven) belonging to the given week number
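# The caller passes the week number and a 'YYYY-MM' string; the previous and next
# months are scanned as well because a week can overlap two months (or two years
# around January/December).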
sub get_wdays_per_month
{
my $wn = shift;
my ($year, $month) = split(/\-/, shift);
my @months = ();
my @retdays = ();
$month ||= '01';
push(@months, "$year$month");
my $start_month = $month;
if ($month eq '01') {
unshift(@months, ($year - 1) . "12");
} else {
unshift(@months, $year . sprintf("%02d", $month - 1));
}
if ($month == 12) {
push(@months, ($year+1) . "01");
} else {
push(@months, $year . sprintf("%02d", $month + 1));
}
foreach my $d (@months)
{
$d =~ /^(\d{4})(\d{2})$/;
my $y = $1;
my $m = $2;
foreach my $day ("01" .. "31")
{
# Check if the date is valid first
my $datefmt = POSIX::strftime("%Y-%m-%d", 1, 1, 1, $day, $m - 1, $y - 1900);
if ($datefmt ne "$y-$m-$day") {
next;
}
my $weekNumber = '';
if (!$iso_week_number)
{
if (!$week_start_monday) {
$weekNumber = POSIX::strftime("%U", 1, 1, 1, $day, $m - 1, $y - 1900);
} else {
$weekNumber = POSIX::strftime("%W", 1, 1, 1, $day, $m - 1, $y - 1900);
}
}
else
{
$weekNumber = POSIX::strftime("%V", 1, 1, 1, $day, $m - 1, $y - 1900);
}
if (!$iso_week_number)
{
if ( ($weekNumber == $wn) || ( ($weekNumber eq '00') && (($wn == 1) || ($wn >= 52)) ) )
{
push(@retdays, "$year-$m-$day");
return @retdays if ($#retdays == 6);
}
}
else
{
if ( ($weekNumber == $wn) || ( ($weekNumber eq '01') && (($wn == 1) || ($wn >= 53)) ) )
{
push(@retdays, "$year-$m-$day");
return @retdays if ($#retdays == 6);
}
}
next if ($weekNumber > $wn);
}
}
return @retdays;
}
sub IsLeapYear
{
return ((($_[0] & 3) == 0) && (($_[0] % 100 != 0) || ($_[0] % 400 == 0)));
}
####
# Display calendar
####
sub get_calendar
{
my ($curdb, $year, $month) = @_;
my $str = "
\n";
my @wday = qw(Su Mo Tu We Th Fr Sa);
my @std_day = qw(Su Mo Tu We Th Fr Sa);
if ($week_start_monday || $iso_week_number) {
@wday = qw(Mo Tu We Th Fr Sa Su);
@std_day = qw(Mo Tu We Th Fr Sa Su);
}
my %day_lbl = ();
for (my $i = 0; $i <= $#wday; $i++) {
$day_lbl{$wday[$i]} = $wday[$i];
}
$str .= "
";
map { $str .= '
' . $day_lbl{$_} . '
'; } @wday;
$str .= "
\n\n";
my @currow = ('','','','','','','');
my $wd = 0;
my $wn = 0;
my $week = '';
my $dest_dir = $html_outdir || $outdir;
my $tmp_dir = "$dest_dir/$curdb";
$tmp_dir = $dest_dir if (!$report_per_database);
for my $d ("01" .. "31") {
last if (($d == 31) && grep(/^$month$/, '04','06','09','11'));
last if (($d == 30) && ($month eq '02'));
last if (($d == 29) && ($month eq '02') && !&IsLeapYear($year));
$wd = &get_day_of_week($year,$month,$d);
$wn = &get_week_number($year,$month,$d);
next if ($wn == -1);
if ( !-e "$tmp_dir/$year/$month/$d/index.html" ) {
$currow[$wd] = "
};
}
sub _gethostbyaddr
{
my $ip = shift;
my $host = undef;
unless(exists $CACHE_DNS{$ip}) {
eval {
local $SIG{ALRM} = sub { die "DNS lookup timeout.\n"; };
alarm($DNSLookupTimeout);
$host = gethostbyaddr(inet_aton($ip), AF_INET);
alarm(0);
};
if ($@) {
$CACHE_DNS{$ip} = undef;
#printf "_gethostbyaddr timeout : %s\n", $ip;
}
else {
$CACHE_DNS{$ip} = $host;
#printf "_gethostbyaddr success : %s (%s)\n", $ip, $host;
}
}
return $CACHE_DNS{$ip} || $ip;
}
sub localdie
{
my ($msg, $code) = @_;
my ($package, $filename, $line) = caller;
print STDERR "$msg - Error at line $line\n";
unlink("$PID_FILE");
exit($code || 1);
}
####
# Skip unwanted lines
# Return 1 when the line must be excluded
# Return -1 if we are after the end timestamp
# Return 0 if this is a wanted line
####
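# Illustrative example (hypothetical option value): with
#   --exclude-time '2025-03-16 1[0-2]:.*'
# any line whose timestamp matches the regex is skipped (return 1).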
sub skip_unwanted_line
{
# do not report lines which are not at the specific times
if ($#exclude_time >= 0)
{
my $found = 0;
foreach (@exclude_time)
{
if ($prefix_vars{'t_timestamp'} =~ /$_/)
{
$found = 1;
last;
}
}
return 1 if ($found);
}
# Only reports lines that are at the specific times
if ($#include_time >= 0)
{
my $found = 0;
foreach (@include_time)
{
if ($prefix_vars{'t_timestamp'} !~ /$_/)
{
$found = 1;
last;
}
}
return 1 if ( $found );
}
# Check the begin or end time without the date; the hour is extracted
# here, this late, so that the timezone offset has already been applied
if ( $from_hour || $to_hour )
{
return 1 if ( $from_hour && ( $from_hour gt $prefix_vars{'t_time'} ) );
return 1 if ( $to_hour && ( $to_hour lt $prefix_vars{'t_time'} ) );
}
else
{
# check against date/timestamp
return 1 if ($from && ($from gt $prefix_vars{'t_timestamp'}));
return -1 if ($to && ($to lt $prefix_vars{'t_timestamp'}));
}
return 0;
}
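# change_timezone: convert a broken-down timestamp to epoch, shift it by the
# $log_timezone offset (in seconds) and return the adjusted, zero-padded
# year/month/day/hour/minute/second values.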
sub change_timezone
{
my ($y, $mo, $d, $h, $mi, $s) = @_;
my $t = timegm_nocheck($s, $mi, $h, $d, $mo-1, $y-1900);
$t += $log_timezone;
($s, $mi, $h, $d, $mo, $y) = gmtime($t);
return ($y+1900, sprintf("%02d", ++$mo), sprintf("%02d", $d), sprintf("%02d", $h), sprintf("%02d", $mi), sprintf("%02d", $s));
}
# Set the default extension and output format, load the JSON Perl module if required
# Force text output with normalized query list only and disable incremental report
# Set default filename of the output file
sub set_output_extension
{
my ($od, $of, $ext, $db) = @_;
if (!$ext)
{
if ($of =~ /\.bin/i)
{
$ext = 'binary';
}
elsif ($of =~ /\.json/i)
{
if (eval {require JSON::XS;1;} ne 1)
{
localdie("Can not save output in json format, please install Perl module JSON::XS first.\n");
}
else
{
JSON::XS->import();
}
$ext = 'json';
}
elsif ($of =~ /\.htm[l]*/i)
{
$ext = 'html';
}
elsif ($of)
{
$ext = 'txt';
}
else
{
$ext = 'html';
}
}
elsif (lc($ext) eq 'json')
{
if (eval {require JSON::XS;1;} ne 1)
{
localdie("Can not save output in json format, please install Perl module JSON::XS first.\n");
}
else
{
JSON::XS->import();
}
}
if ($dump_normalized_only || $dump_all_queries)
{
$ext = 'txt';
$incremental = 0;
$report_title = 'Normalized query report' if (!$report_title && !$dump_all_queries);
}
$of ||= 'out.' . $ext;
# Append the current database name to the output file.
$of = $db . '_' . $of if ($db && $report_per_database && $db ne $DBALL && $of ne '-');
# Add the destination directory
$of = $od . '/' . $of if ($od && $of !~ m#^\Q$od\E# && $of ne '-');
&logmsg('DEBUG', "Output '$ext' reports will be written to $of") if (!$rebuild && !$report_per_database);
return ($of, $ext);
}
# Set timezone
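# e.g. a command line timezone of +2 is stored as an internal offset of -7200
# seconds (the value is negated and converted to seconds below); when no offset
# is given, the local timezone and DST offset are derived from timelocal()/timegm().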
sub set_timezone
{
my $init = shift;
$timezone = ((0-$opt_timezone)*3600);
$log_timezone = ((0-$opt_log_timezone)*3600);
return if ($init);
if (!$timezone && !$isUTC) {
my @lt = localtime();
# account for the timezone and Daylight Saving Time offsets
$timezone = timelocal(@lt) - timegm(@lt);
&logmsg('DEBUG', "timezone not specified, using $timezone seconds" );
}
}
sub maxlen_truncate
{
if ($maxlength && length($_[0]) > $maxlength)
{
$_[0] = substr($_[0], 0, $maxlength);
$_[0] =~ s/((?:[,\(=\~]|LIKE)\s*'[^']*)$/$1'/is if ($anonymize);
$_[0] .= '[...]';
}
}
# Return 1 or 2 depending on the EOL type (CR, LF or CRLF)
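# e.g. returns 1 for 'LF' or 'CR' and 2 for 'CRLF' (half the length of the marker
# string stored in $LOG_EOL_TYPE).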
sub get_eol_length
{
return length($LOG_EOL_TYPE)/2;
}
####
# Return the type of EOL of a file: Mac (CR), Windows (CRLF), Linux (LF)
# Assume LF by default
####
sub get_eol_type
{
my $file = shift;
if (open(my $in, '<', $file))
{
binmode($in); # read raw bytes so the CR/LF counts below are not skewed by an I/O layer
my ( $cr, $lf, $crlf ) = ( 0 ) x 3;
my $c = '';
while ( read($in, $c, 65536, 1) )
{
$lf += $c=~ tr/\x0A/\x0A/;
$cr += $c =~ tr/\x0D/\x0D/;
$crlf += $c =~ s/\x0D\x0A/xx/g ;
$c = chop;
$cr-- if ( $c eq "\x0D" ); # a final CR or LF will get counted
$lf-- if ( $c eq "\x0A" ); # again on the next iteration
}
close($in);
$cr++ if ( $c eq "\x0D" );
$lf++ if ( $c eq "\x0A" );
if ($lf && !$cr) {
return 'LF';
} elsif ($cr && !$lf) {
return 'CR';
} elsif ($crlf) {
return 'CRLF';
}
}
else
{
print "WARNING: can't read file $file, $!\n";
}
return 'LF';
}
sub html_escape
{
my $toencode = shift;
return undef unless defined($toencode);
utf8::encode($toencode) if utf8::is_utf8($toencode);
$toencode=~s/([^a-zA-Z0-9_.~-])/uc sprintf("%%%02x",ord($1))/eg;
return $toencode;
}
sub dump_raw_csv
{
my $t_pid = shift;
my $sep = ';';
if ($csv_sep_char) {
$sep = $csv_sep_char;
}
# CSV columns information:
# ------------------------
# timestamp without milliseconds
# username
# database name
# Process id
# Remote host
# session id
# Error severity
# SQL state code
# Query duration
# user query / error message
# bind parameters
# application name
# backend type
# query id
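# The query field is quoted only when it contains the separator (embedded double
# quotes are doubled); parameters and appname are always quoted when present.
# A hypothetical output row:
#   2025-03-16 12:00:00.123;alice;mydb;12345;10.0.0.1;65f1a2b3.3039;LOG;;1.5;select 1;;"myapp";client backend;0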
print "timestamp${sep}username${sep}dbname${sep}pid${sep}client${sep}sessionid${sep}loglevel${sep}sqlstate${sep}duration${sep}query/error${sep}parameters${sep}appname${sep}backendtype${sep}queryid\n" if (!$header_done);
$header_done = 1;
print "$cur_info{$t_pid}{timestamp}$cur_info{$t_pid}{ms}${sep}$cur_info{$t_pid}{dbuser}${sep}$cur_info{$t_pid}{dbname}${sep}";
print "$cur_info{$t_pid}{pid}${sep}$cur_info{$t_pid}{dbclient}${sep}$cur_info{$t_pid}{session}${sep}";
print "$cur_info{$t_pid}{loglevel}${sep}$cur_info{$t_pid}{sqlstate}${sep}$cur_info{$t_pid}{duration}${sep}";
my $query = ($cur_info{$t_pid}{query} || $cur_lock_info{$t_pid}{query} || $cur_temp_info{$t_pid}{query}
|| $cur_cancel_info{$t_pid}{query} || "plan:\n" .$cur_plan_info{$t_pid}{plan});
$query =~ s/[\r\n]/\\n/gs;
if ($query =~ /${sep}/) {
$query =~ s/"/""/g;
$query = '"' . $query . '"';
}
print $query . "${sep}";
($cur_info{$t_pid}{parameters}) ? print "\"$cur_info{$t_pid}{parameters}\"${sep}" : print "${sep}";
($cur_info{$t_pid}{dbappname}) ? print "\"$cur_info{$t_pid}{dbappname}\"${sep}" : print "${sep}";
print "$cur_info{$t_pid}{backendtype}${sep}$cur_info{$t_pid}{queryid}\n";
}
# Inclusion of Perl package pgFormatter::Beautify.
{
package pgFormatter::Beautify;
use strict;
use warnings;
use warnings qw( FATAL );
use Encode qw( decode );
use utf8;
binmode STDIN, ':utf8';
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';
use Text::Wrap;
our $DEBUG = 0;
our $DEBUG_SP = 0;
# PostgreSQL functions that use a FROM clause
our @have_from_clause = qw( extract overlay substring trim );
our @extract_keywords = qw(century day decade dow doy epoch hour isodow isoyear microseconds millennium minute month quarter second timezone timezone_minute week year);
our $math_operators = qr{^(?:\+|\-|\*|\/|\%|\^|\|\/|\|\|\/|\!|\!\!|\@|\&|\||\#|\~|<<|>>)$};
=head1 NAME
pgFormatter::Beautify - Library for pretty-printing SQL queries
=head1 VERSION
Version 5.5
=cut
# Version of pgFormatter
our $VERSION = '5.5';
# Inclusion of code from Perl package SQL::Beautify
# Copyright (C) 2009 by Jonas Kramer
# Published under the terms of the Artistic License 2.0.
=head1 SYNOPSIS
This module can be used to reformat given SQL query, optionally anonymizing parameters.
Output can be either plain text, or it can be HTML with appropriate styles so that it can be displayed on a web page.
Example usage:
my $beautifier = pgFormatter::Beautify->new();
$beautifier->query( 'select a,b,c from d where e = f' );
$beautifier->beautify();
my $nice_txt = $beautifier->content();
$beautifier->format('html');
$beautifier->beautify();
my $nice_html = $beautifier->content();
$beautifier->format('html');
$beautifier->anonymize();
$beautifier->beautify();
my $nice_anonymized_html = $beautifier->content();
$beautifier->format();
$beautifier->beautify();
$beautifier->wrap_lines()
my $wrapped_txt = $beautifier->content();
=head1 FUNCTIONS
=head2 new
Generic constructor - creates object, sets defaults, and reads config from given hash with options.
Takes options as hash. Following options are recognized:
=over
=item * break - String that is used for linebreaks. Default is "\n".
=item * colorize - if set to false CSS style will not be applied to html output. Used internally to display errors in CGI mode without style.
=item * comma - set comma at beginning or end of a line in a parameter list
=over
=item end - put comma at end of the list (default)
=item start - put comma at beginning of the list
=back
=item * comma_break - add new-line after each comma in INSERT statements
=item * format - set beautify format to apply to the content (default: text)
=over
=item text - output content as plain/text (command line mode default)
=item html - output text/html with CSS style applied to content (CGI mode default)
=back
=item * functions - list (arrayref) of strings that are function names
=item * keywords - list (arrayref) of strings that are keywords
=item * multiline - use multi-line search for placeholder regex, see placeholder.
=item * no_comments - if set to true comments will be removed from query
=item * no_grouping - if set to true statements will not be grouped in a transaction; an extra newline character is added between statements, as if they were outside a transaction.
=item * placeholder - use the specified regex to find code that must not be changed in the query.
=item * query - query to beautify
=item * rules - hash of rules - uses rule semantics from SQL::Beautify
=item * space - character(s) to be used as space for indentation
=item * spaces - how many spaces to use for indentation
=item * uc_functions - what to do with function names:
=over
=item 0 - do not change
=item 1 - change to lower case
=item 2 - change to upper case
=item 3 - change to Capitalized
=back
=item * separator - string used as dynamic code separator, default is single quote
=item * uc_keywords - what to do with keywords - meaning of value like with uc_functions
=item * uc_types - what to do with data types - meaning of value like with uc_functions
=item * wrap - wraps given keywords in pre- and post- markup. Specific docs in SQL::Beautify
=item * format_type - try another formatting
=item * wrap_limit - wrap queries at a certain length
=item * wrap_after - number of column after which lists must be wrapped
=item * wrap_comment - apply wrapping to comments starting with --
=item * numbering - statement numbering as a comment before each query
=item * redshift - add Redshift keywords (obsolete, use --extra-keyword)
=item * no_extra_line - do not add an extra empty line at end of the output
=item * keep_newline - preserve empty line in plpgsql code
=item * no_space_function - remove space before function call and open parenthesis
=back
For defaults, please check function L<set_defaults>.
=cut
sub new
{
my $class = shift;
my %options = @_;
our @have_from_clause = qw( extract overlay substring trim );
our @extract_keywords = qw(century day decade dow doy epoch hour isodow isoyear microseconds millennium minute month quarter second timezone timezone_minute week year);
our $math_operators = qr{^(?:\+|\-|\*|\/|\%|\^|\|\/|\|\|\/|\!|\!\!|\@|\&|\||\#|\~|<<|>>)$};
my $self = bless {}, $class;
$self->set_defaults();
for my $key ( qw( query spaces space break wrap keywords functions rules uc_keywords uc_functions uc_types no_comments no_grouping placeholder multiline separator comma comma_break format colorize format_type wrap_limit wrap_after wrap_comment numbering redshift no_extra_line keep_newline no_space_function)) {
$self->{ $key } = $options{ $key } if defined $options{ $key };
}
$self->_refresh_functions_re();
# Make sure "break" is sensible
$self->{ 'break' } = ' ' if $self->{ 'spaces' } == 0;
# Initialize internal stuff.
$self->{ '_level' } = 0;
# Array to store placeholders values
@{ $self->{ 'placeholder_values' } } = ();
# Hash to store dynamic code
%{ $self->{ 'dynamic_code' } } = ();
# Hash to store and preserve constants
%{ $self->{ 'keyword_constant' } } = ();
# Hash to store and preserve aliases between double quote
%{ $self->{ 'alias_constant' } } = ();
# Check comma value, when invalid set to default: end
if (lc($self->{ 'comma' }) ne 'start') {
$self->{ 'comma' } = 'end';
} else {
$self->{ 'comma' } = lc($self->{ 'comma' });
}
$self->{ 'format' } //= 'text';
$self->{ 'colorize' } //= 1;
$self->{ 'format_type' } //= 0;
$self->{ 'wrap_limit' } //= 0;
$self->{ 'wrap_after' } //= 0;
$self->{ 'wrap_comment' } //= 0;
$self->{ 'no_extra_line' } //= 0;
return $self;
}
=head2 query
Accessor to query string. Both reads:
$object->query()
, and writes
$object->query( $something )
=cut
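# Note: before any formatting, this accessor hides the parts of the query that
# must never be rewritten (quoted constants, escaped quotes, double-quoted
# aliases, non-SQL function bodies) behind placeholders such as PGFESCQ*,
# AAKEYWCONST* and CODEPART*, so the beautifier leaves their content untouched;
# the original text is restored after formatting.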
sub query
{
my $self = shift;
my $new_value = shift;
$self->{ 'query' } = $new_value if defined $new_value;
$self->{idx_code} = 0;
# Replace any COMMENT constant between single quote
while ($self->{ 'query' } =~ s/IS\s+([EU]*'(?:[^;]*)')\s*;/IS TEXTVALUE$self->{idx_code};/is)
{
$self->{dynamic_code}{$self->{idx_code}} = $1;
$self->{dynamic_code}{$self->{idx_code}} =~ s/([\n\r])\s+([EU]*'(?:[^']*)')/$1 . ($self->{ 'space' } x $self->{ 'spaces' }) . $2/gsei;
$self->{idx_code}++;
}
# Replace any \\ by BSLHPGF
$self->{ 'query' } =~ s/\\\\/BSLHPGF/sg;
# Replace any \' by PGFBSLHQ
$self->{ 'query' } =~ s/\\'/PGFBSLHQ/sg;
# Replace any '' by PGFESCQ1
while ($self->{ 'query' } =~ s/([^'])''([^'])/$1PGFESCQ1$2/s) {};
# Replace any '''' by PGFESCQ1PGFESCQ1
while ($self->{ 'query' } =~ s/([^'])''''([^'])/$1PGFESCQ1PGFESCQ1$2/s) {};
# Replace any '...''' by '.*PGFESCQ1'
while ($self->{ 'query' } =~ s/([^']'[^']+)''('[^'])/$1PGFESCQ1$2/s) {};
# Replace any '''...' by 'PGFESCQ1.*'
while ($self->{ 'query' } =~ s/([^']')''([^']+'[^'])/$1PGFESCQ1$2/s) {};
# Replace any multiline '''...''' by 'PGFESCQ1...PGFESCQ1'
$self->{ 'query' } =~ s/([^']')''([^']*)''('[^']|$)/$1PGFESCQ1$2PGFESCQ1$3/sg;
# Replace any "" by PGFESCQ2
while ($self->{ 'query' } =~ s/([^"])""([^"])/$1PGFESCQ2$2/s) {};
# Replace aliases using double quotes
my $j = 0;
while ($self->{ 'query' } =~ s/(\s+AS\s*)("+[^"]+"+)/$1PGFALIAS$j/is)
{
$self->{ 'alias_constant' }{$j} = $2;
$j++;
}
# Replace all constants between quotes
$j = 0;
while ($self->{ 'query' } =~ s/('[^'\n\r]+')/AAKEYWCONST${j}AA/s)
{
$self->{ 'keyword_constant' }{$j} = $1;
$j++;
}
# Fix false positive generated by code above.
while ($self->{ 'query' } =~ s/(\s+AS\s+)AAKEYWCONST(\d+)AA/$1$self->{ 'keyword_constant' }{$2}/is) {
delete $self->{ 'keyword_constant' }{$2};
};
# Hide the content of the format() function when the code separator is not a single quote
my $i = 0;
while ($self->{ 'query' } =~ s/\bformat\((\$(?:.*)?\$\s*)([,\)])/format\(CODEPARTB${i}CODEPARTB$2/i) {
push(@{ $self->{ 'placeholder_values' } }, $1);
$i++;
}
my %temp_placeholder = ();
my @temp_content = split(/(CREATE(?:\s+OR\s+REPLACE)?\s+(?:FUNCTION|PROCEDURE)\s+)/i, $self->{ 'query' });
if ($#temp_content > 0)
{
for (my $j = 0; $j <= $#temp_content; $j++)
{
next if ($temp_content[$j] =~ /^CREATE/i or $temp_content[$j] eq '');
# Replace the single-quote code delimiter with $PGFDLM$
if ($temp_content[$j] !~ s/(\s+AS\s+)'(\s+.*?;\s*)'/$1\$PGFDLM\$$2\$PGFDLM\$/is)
{
$temp_content[$j] =~ s/(\s+AS\s+)'(\s+.*?END[;]*\s*)'/$1\$PGFDLM\$$2\$PGFDLM\$/is;
}
# Remove any call to CREATE/DROP LANGUAGE so as not to break the search for the function code separator
$temp_content[$j] =~ s/(CREATE|DROP)\s+LANGUAGE\s+[^;]+;.*//is;
# Fix the case where a code separator with $ is attached to the BEGIN/END keywords
$temp_content[$j] =~ s/([^\s]+\$)(BEGIN\s)/$1 $2/igs;
$temp_content[$j] =~ s/(\sEND)(\$[^\s]+)/$1 $2/igs;
$temp_content[$j] =~ s/(CREATE|DROP)\s+LANGUAGE\s+[^;]+;.*//is;
my $fctname = '';
if ($temp_content[$j] =~ /^([^\s\(]+)/) {
$fctname = lc($1);
}
next if (!$fctname);
my $language = 'sql';
if ($temp_content[$j] =~ /\s+LANGUAGE\s+[']*([^'\s;]+)[']*/is)
{
$language = lc($1);
if ($language =~ /AAKEYWCONST(\d+)AA/i)
{
$language = lc($self->{ 'keyword_constant' }{$1});
$language =~ s/'//g;
}
}
if ($language =~ /^internal$/i)
{
if ($temp_content[$j] =~ s/AS ('[^\']+')/AS CODEPARTB${i}CODEPARTB/is)
{
push(@{ $self->{ 'placeholder_values' } }, $1);
$i++;
}
}
# C function language with AS obj_file, link_symbol
elsif ($language =~ /^c$/i)
{
if ($temp_content[$j] =~ s/AS ('[^\']+')\s*,\s*('[^\']+')/AS CODEPARTB${i}CODEPARTB/is)
{
push(@{ $self->{ 'placeholder_values' } }, "$1, $2");
$i++;
}
}
# if the function language is not SQL or PLPGSQL
elsif ($language !~ /^(?:plpg)?sql$/)
{
# Try to find the code separator
my $tmp_str = $temp_content[$j];
while ($tmp_str =~ s/\s+AS\s+([^\s]+)\s+//is)
{
my $code_sep = quotemeta($1);
foreach my $k (@{ $self->{ 'keywords' } }) {
last if ($code_sep =~ s/\b$k$//i);
}
next if (!$code_sep);
if ($tmp_str =~ /\s+$code_sep[\s;]+/)
{
while ( $temp_content[$j] =~ s/($code_sep(?:.+?)$code_sep)/CODEPART${i}CODEPART/s)
{
push(@{ $self->{ 'placeholder_values' } }, $1);
$i++;
}
last;
}
}
}
}
}
$self->{ 'query' } = join('', @temp_content);
# Store values of code that must not be changed following the given placeholder
if ($self->{ 'placeholder' })
{
if (!$self->{ 'multiline' })
{
while ( $self->{ 'query' } =~ s/($self->{ 'placeholder' })/PLACEHOLDER${i}PLACEHOLDER/)
{
push(@{ $self->{ 'placeholder_values' } }, $1);
$i++;
}
}
else
{
while ( $self->{ 'query' } =~ s/($self->{ 'placeholder' })/PLACEHOLDER${i}PLACEHOLDER/s)
{
push(@{ $self->{ 'placeholder_values' } }, $1);
$i++;
}
}
}
# Replace dynamic code with placeholder
$self->_remove_dynamic_code( \$self->{ 'query' }, $self->{ 'separator' } );
# Replace operator with placeholder
$self->_quote_operator( \$self->{ 'query' } );
# Replace comments that have no quote delimiter with a placeholder
$self->_quote_comment_stmt( \$self->{ 'query' } );
return $self->{ 'query' };
}
=head2 content
Accessor to content of results. Must be called after $object->beautify().
This can be either plain text or html following the format asked by the
client with the $object->format() method.
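A hedged sketch of the typical call sequence (variable names are illustrative):

    $beautifier->query( $sql );          # load the statement to format
    $beautifier->beautify();             # build the formatted result
    my $formatted = $beautifier->content();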
=cut
sub content
{
my $self = shift;
my $new_value = shift;
$self->{ 'content' } = $new_value if defined $new_value;
$self->{ 'content' } =~ s/\(\s+\(/\(\(/gs;
# Replace placeholders with their original dynamic code
$self->_restore_dynamic_code( \$self->{ 'content' } );
# Replace placeholders with their original operator
$self->_restore_operator( \$self->{ 'content' } );
# Replace placeholders with their original string
$self->_restore_comment_stmt( \$self->{ 'content' } );
# Replace placeholders by their original values
if ($#{ $self->{ 'placeholder_values' } } >= 0)
{
$self->{ 'content' } =~ s/PLACEHOLDER(\d+)PLACEHOLDER/$self->{ 'placeholder_values' }[$1]/igs;
$self->{ 'content' } =~ s/CODEPART[B]*(\d+)CODEPART[B]*/$self->{ 'placeholder_values' }[$1]/igs;
}
$self->{ 'content' } =~ s/PGFALIAS(\d+)/$self->{ 'alias_constant' }{$1}/gs;
while ( $self->{ 'content' } =~ s/AAKEYWCONST(\d+)AA/$self->{ 'keyword_constant' }{$1}/s ) {
delete $self->{ 'keyword_constant' }{$1};
};
# Replace any BSLHPGF by \\
$self->{ 'content' } =~ s/BSLHPGF/\\\\/g;
# Replace any PGFBSLHQ by \'
$self->{ 'content' } =~ s/PGFBSLHQ/\\'/g;
# Replace any $PGFDLM$ by code delimiter '
$self->{ 'content' } =~ s/\$PGFDLM\$/'/g;
# Replace any PGFESCQ1 by ''
$self->{ 'content' } =~ s/PGFESCQ1/''/g;
# Replace any PGFESCQ2 by ""
$self->{ 'content' } =~ s/PGFESCQ2/""/g;
return $self->{ 'content' };
}
=head2 highlight_code
Makes result html with styles set for highlighting.
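A hedged sketch of a single call (token values are illustrative; the exact markup returned depends on the style dictionary configured in the object):

    # Wrap one token, given its previous and next tokens, in highlighting markup.
    my $html_token = $beautifier->highlight_code( 'SELECT', '', 'id' );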
=cut
sub highlight_code
{
my ($self, $token, $last_token, $next_token) = @_;
# Do not use uninitialized variable
$last_token //= '';
$next_token //= '';
# Colorize operators
while ( my ( $k, $v ) = each %{ $self->{ 'dict' }->{ 'symbols' } } ) {
if ($token eq $k) {
$token = '' . $v . '';
return $token;
}
}
# lowercase/uppercase keywords, taking care of functions with the same name
if ( $self->_is_keyword( $token, $next_token, $last_token ) && (!$self->_is_function( $token, $last_token, $next_token ) || $next_token ne '(') ) {
if ( $self->{ 'uc_keywords' } == 1 ) {
$token = '' . $token . '';
} elsif ( $self->{ 'uc_keywords' } == 2 ) {
$token = '' . $token . '';
} elsif ( $self->{ 'uc_keywords' } == 3 ) {
$token = '' . $token . '';
} else {
$token = '' . $token . '';
}
return $token;
}
# lowercase/uppercase known functions or words followed by an open parenthesis
# if the token is not a keyword, an open parenthesis or a comment
if (($self->_is_function( $token, $last_token, $next_token ) && $next_token eq '(')
|| (!$self->_is_keyword( $token, $next_token, $last_token ) && !$next_token eq '('
&& $token ne '(' && !$self->_is_comment( $token )) ) {
if ($self->{ 'uc_functions' } == 1) {
$token = '' . $token . '';
} elsif ($self->{ 'uc_functions' } == 2) {
$token = '' . $token . '';
} elsif ($self->{ 'uc_functions' } == 3) {
$token = '' . $token . '';
} else {
$token = '' . $token . '';
}
return $token;
}
# Colorize STDIN/STDOUT in COPY statement
if ( grep(/^\Q$token\E$/i, @{ $self->{ 'dict' }->{ 'copy_keywords' } }) ) {
if ($self->{ 'uc_keywords' } == 1) {
$token = '' . $token . '';
} elsif ($self->{ 'uc_keywords' } == 2) {
$token = '' . $token . '';
} elsif ($self->{ 'uc_keywords' } == 3) {
$token = '' . $token . '';
} else {
$token = '' . $token . '';
}
return $token;
}
# Colorize parenthesis
if ( grep(/^\Q$token\E$/i, @{ $self->{ 'dict' }->{ 'brackets' } }) ) {
$token = '' . $token . '';
return $token;
}
# Colorize comment
if ( $self->_is_comment( $token ) ) {
$token = '' . $token . '';
return $token;
}
# Colorize numbers
$token =~ s/\b(\d+)\b/$1<\/span>/igs;
# Colorize string
$token =~ s/('.*?(?$1<\/span>/gs;
$token =~ s/(`[^`]*`)/$1<\/span>/gs;
return $token;
}
=head2 tokenize_sql
Splits input SQL into tokens
Code lifted from SQL::Beautify
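A hedged sketch (in list context the tokens are returned directly, otherwise they are stored in the object for later use by beautify()):

    # List context: get the tokens back as an array.
    my @tokens = $beautifier->tokenize_sql( 'SELECT id, name FROM t' );

    # Scalar/void context: tokens are kept internally in the object.
    $beautifier->tokenize_sql();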
=cut
sub tokenize_sql
{
my $self = shift;
my $query = shift;
$query ||= $self->{ 'query' };
# just in case it has not been called in the main script
$query = $self->query() if (!$query);
my $re = qr{
(
(?:\\(?:copyright|errverbose|gx|gexec|gset|gdesc|q|crosstabview|watch|\?|copy|qecho|echo|if|elif|else|endif|edit|ir|include_relative|include|warn|write|html|print|out|ef|ev|h|H|i|p|r|s|w|o|e|g|q|d(?:[aAbcCdDeEfFgilLmnoOpPrRstTuvwxy+]{0,3})?|l\+?|sf\+?|sv\+?|z|a|C|f|H|t|T|x|c|pset|connect|encoding|password|conninfo|cd|setenv|timing|prompt|reset|set|unset|lo_export|lo_import|lo_list|lo_unlink|\!))(?:$|[\n]|[\ \t](?:(?!\\(?:\\|pset|reset|connect|encoding|password|conninfo|cd|setenv|timing|prompt|set|unset|lo_export|lo_import|lo_list|lo_unlink|\!|copy|qecho|echo|edit|html|include_relative|include|print|out|warn|watch|write|q))[\ \t\S])*) # psql meta-command
|
AAKEYWCONST\d+AA(?:\s+AAKEYWCONST\d+AA)+ # preserved multiple constants with newline
|
AAKEYWCONST\d+AA # preserved constants
|
\/\/ # mysql delimiter ( $$ is handled later with PG code delimiters )
|
(?:COPY\s+[^\s]+\s+\((?:.*?)\\\.) # COPY and its content
|
[^\s\(,]+\%(?:ROWTYPE|TYPE) # %ROWTYPE and %TYPE variable declarations
|
(?:\s*--)[\ \t\S]* # single line comments
|
(?:\-\|\-) # range operator "is adjacent to"
|
(?:<\%|\%>|<<\->|<\->>|<\->) # pg_trgm and some geometry operators
|
(?:\->>|\->|<\#>|\#>>|\#>|\?\&|\?\||\?|\@\?) # Vector and Json Operators
|
(?:\#<=|\#>=|\#<>|\#<|\#=) # compares tinterval and reltime
|
(?:>>=|<<=) # inet operators
|
(?:!!|\@\@\@) # deprecated factorial and full text search operators
|
(?:\|\|\/|\|\/) # square root and cube root
|
(?:\@\-\@|\@\@|\#\#|<<\||\|>>|\&<\||\&<|\|\&>|\&>|<\^|>\^|\?\#|\#|\?<\||\?\-\||\?\-|\?\|\||\?\||\@>|<\@|\~=)
# Geometric Operators
|
(?:~<=~|~>=~|~>~|~<~) # string comparison for pattern matching operator families
|
(?:!~~|!~~\*|~~\*|~~) # LIKE operators
|
(?:!~\*|!~|~\*) # regular expression operators
|
(?:\*=|\*<>|\*<=|\*>=|\*<|\*>) # composite type comparison operators
|
(?:\d+e[\+\-]\d+) # signed exponents
|
(?:<>|<=>|>=|<=|=>|==|!=|:=|=|!|<<|>>|<|>|\|\||\||&&|&|\-|\+|\*(?!/)|/(?!\*)|\%|~|\^|\?) # operators and tests
|
[\[\]\(\),;.] # punctuation (parenthesis, comma)
|
\"\"(?!\"") # empty double quoted string
|
"[^\"\s]+"\.[^\"\s\(\)=<>!~\*&:\|\-\+\%\^\?\@\#\[\]\{\}\.,;']+ # fqdn identifier form "schema".table or "table".column
|
[^\"\s=<>!~\*&\(\):\|\-\+\%\^\?\@\#\[\]\{\}\.,;']+\."[^\"\s]+" # fqdn identifier form schema."table" or table."column"
|
"[^\"\s]+"\."[^\"\s]+" # fqdn identifier form "schema"."table" or "table"."column"
|
"(?>(?:(?>[^"\\]+)|""|\\.)*)+" # anything inside double quotes, ungreedy
|
`(?>(?:(?>[^`\\]+)|``|\\.)*)+` # anything inside backticks quotes, ungreedy
|
[EB]*'[^']+' # anything inside single quotes, ungreedy.
|
/\*[\ \t\r\n\S]*?\*/ # C style comments
|
(?:[\w:\@]+[\$]*[\w:\@]*(?:\.(?:\w+|\*)?)*) # words, standard named placeholders, db.table.*, db.*
|
(?:\$\w+\$)
|
(?: \$_\$ | \$\d+ | \${1,2} | \$\w+\$) # dollar expressions - eg $_$ $3 $$ $BODY$
|
(?:\r\n){2,} # empty line Windows
|
\n{2,} # empty line Unix
|
\r{2,} # empty line Mac
|
[\t\ ]+ # any kind of white spaces
|
[^\s\*\/\-\\;:,]+ # anything else
)
}ismx;
my @query = grep { /\S/ } $query =~ m{$re}simxg;
if ($self->{ 'keep_newline' }) {
@query = grep { /(?:\S|^[\r\n]+$)/ } $query =~ m{$re}simxg;
}
# Swap positions when a comment appears just before a comma
if ($self->{ 'comma' } eq 'end')
{
for (my $i = 0; $i < ($#query - 1); $i++)
{
if ($query[$i+1] eq ',' and $self->_is_comment($query[$i]))
{
$query[$i+1] = $query[$i];
$query[$i] = ',';
}
}
}
# Fix token split of negative numbers
if ($#query > 2)
{
for (my $i = 2; $i <= $#query; $i++)
{
if ($query[$i] =~ /^[\d\.]+$/ && $query[$i-1] =~ /^[\+\-]$/
and ($query[$i-2] =~ /$math_operators/ or $query[$i-2] =~ /^(?:,|\(|\[)$/
or $self->_is_keyword( $query[$i-2]))
)
{
$query[$i] = $query[$i-1] . $query[$i];
$query[$i-1] = '';
}
}
}
@query = grep(!/^$/, @query);
#print STDERR "DEBUG KWDLIST: ", join(' | ', @query), "\n";
return @query if (wantarray);
$self->{ '_tokens' } = \@query;
}
sub _pop_level
{
my ($self, $token, $last_token) = @_;
if ($DEBUG)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_POP: line: $line => last=", ($last_token||''), ", token=$token\n";
}
return 0 if ($#{ $self->{ '_level_stack' } } == -1);
return pop( @{ $self->{ '_level_stack' } } ) || 0;
}
sub _reset_level
{
my ($self, $token, $last_token) = @_;
if ($DEBUG)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_RESET: line: $line => last=", ($last_token||''), ", token=$token\n";
}
@{ $self->{ '_level_stack' } } = ();
$self->{ '_level' } = 0;
$self->{ 'break' } = ' ' unless ( $self->{ 'spaces' } != 0 );
}
sub _set_level
{
my ($self, $position, $token, $last_token) = @_;
return 0 if (not defined $position);
if ($DEBUG)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_SET: line: $line => position=$position, last=", ($last_token||''), ", token=$token\n";
}
$self->{ '_level' } = ($position >= 0) ? $position : 0;
}
sub _push_level
{
my ($self, $position, $token, $last_token) = @_;
if ($DEBUG)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_PUSH: line: $line => position=$position, last=", ($last_token||''), ", token=$token\n";
}
push(@{ $self->{ '_level_stack' } }, (($position >= 0) ? $position : 0));
}
sub _set_last
{
my ($self, $token, $last_token) = @_;
if ($DEBUG)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_LAST: line: $line => last=", ($last_token||''), ", token=$token\n";
}
return $token;
}
=head2 beautify
Beautify SQL.
After calling this function, $object->content() will contain nicely indented result.
Code lifted from SQL::Beautify
=cut
sub beautify
{
my $self = shift;
# Use to store the token position in the array
my $pos = 0;
# Main variables used to store differents state
$self->content( '' );
$self->{ '_level' } = 0;
$self->{ '_level_stack' } = [];
$self->{ '_level_parenthesis' } = [];
$self->{ '_new_line' } = 1;
$self->{ '_current_sql_stmt' } = '';
$self->{ '_is_meta_command' } = 0;
$self->{ '_fct_code_delimiter' } = '';
$self->{ '_language_sql' } = 0;
$self->{ '_first_when_in_case' } = 0;
$self->{ '_is_in_if' } = 0;
$self->{ '_is_in_conversion' } = 0;
$self->{ '_is_in_case' } = 0;
$self->{ '_is_in_where' } = 0;
$self->{ '_is_in_from' } = 0;
$self->{ '_is_in_join' } = 0;
$self->{ '_is_in_create' } = 0;
$self->{ '_is_in_create_schema' } = 0;
$self->{ '_is_in_rule' } = 0;
$self->{ '_is_in_create_function' } = 0;
$self->{ '_is_in_drop_function' } = 0;
$self->{ '_is_in_alter' } = 0;
$self->{ '_is_in_trigger' } = 0;
$self->{ '_is_in_publication' } = 0;
$self->{ '_is_in_call' } = 0;
$self->{ '_is_in_type' } = 0;
$self->{ '_is_in_domain' } = 0;
$self->{ '_is_in_declare' } = 0;
$self->{ '_is_in_block' } = -1;
$self->{ '_is_in_work' } = 0;
$self->{ '_is_in_function' } = 0;
$self->{ '_current_function' } = '';
$self->{ '_is_in_statistics' } = 0;
$self->{ '_is_in_cast' } = 0;
$self->{ '_is_in_procedure' } = 0;
$self->{ '_is_in_index' } = 0;
$self->{ '_is_in_with' } = 0;
$self->{ '_is_in_explain' } = 0;
$self->{ '_is_in_overlaps' } = 0;
$self->{ '_parenthesis_level' } = 0;
$self->{ '_parenthesis_function_level' } = 0;
$self->{ '_has_order_by' } = 0;
$self->{ '_is_in_order_by' } = 0;
$self->{ '_has_over_in_join' } = 0;
$self->{ '_insert_values' } = 0;
$self->{ '_is_in_constraint' } = 0;
$self->{ '_is_in_distinct' } = 0;
$self->{ '_is_in_array' } = 0;
$self->{ '_is_in_filter' } = 0;
$self->{ '_parenthesis_filter_level' } = 0;
$self->{ '_is_in_within' } = 0;
$self->{ '_is_in_grouping' } = 0;
$self->{ '_is_in_partition' } = 0;
$self->{ '_is_in_over' } = 0;
$self->{ '_is_in_policy' } = 0;
$self->{ '_is_in_truncate' } = 0;
$self->{ '_is_in_using' } = 0;
$self->{ '_and_level' } = 0;
$self->{ '_col_count' } = 0;
$self->{ '_is_in_drop' } = 0;
$self->{ '_is_in_operator' } = 0;
$self->{ '_is_in_exception' } = 0;
$self->{ '_is_in_sub_query' } = 0;
$self->{ '_is_in_fetch' } = 0;
$self->{ '_is_in_aggregate' } = 0;
$self->{ '_is_in_value' } = 0;
$self->{ '_parenthesis_level_value' } = 0;
$self->{ '_parenthesis_with_level' } = 0;
$self->{ '_is_in_returns_table' } = 0;
$self->{ '_has_limit' } = 0;
$self->{ '_not_a_type' } = 0;
$self->{ 'stmt_number' } = 1;
$self->{ '_is_subquery' } = 0;
$self->{ '_mysql_delimiter' } = '';
$self->{ '_is_in_generated' } = 0;
$self->{ '_is_in_between' } = 0;
$self->{ '_is_in_materialized' } = 0;
@{ $self->{ '_begin_level' } } = ();
my $last = '';
$self->tokenize_sql();
$self->{ 'content' } .= "-- Statement # $self->{ 'stmt_number' }\n" if ($self->{ 'numbering' } and $#{ $self->{ '_tokens' } } > 0);
while ( defined( my $token = $self->_token ) )
{
my $rule = $self->_get_rule( $token );
if ($self->{ 'keep_newline' } and $self->{ '_is_in_block' } >= 0 and $token =~ /^[\r\n]+$/s
and defined $last and $last eq ';'
)
{
$self->_add_token( $token, $last );
next;
}
# Replace the CONCAT operator found in some other DBMSs with || for normalization
if (lc($token) eq 'concat' && defined $self->_next_token() && $self->_next_token ne '(') {
$token = '||';
}
# Case where a keyword is used as a column name.
if ( $self->{ '_is_in_create' } > 1 and $self->_is_keyword( $token, $self->_next_token(), $last )
and defined $self->_next_token and $self->_is_type($self->_next_token))
{
$self->_add_token($token, $last);
$last = $self->_set_last($token, $last);
next;
}
# COPY block
if ( $token =~ /^COPY\s+[^\s]+\s+\(/i )
{
$self->_new_line($token,$last);
$self->_add_token($token, $last);
$self->_new_line($token,$last);
$self->{ 'content' } .= "\n";
$last = $self->_set_last($token, $last);
next;
}
if (uc($token) eq 'BETWEEN')
{
$self->{ '_is_in_between' } = 1;
$self->_add_token($token, $last);
$last = $self->_set_last($token, $last);
next;
}
# Mark when we are processing a materialized view to avoid formatting issues with parameters
if (uc($token) eq 'MATERIALIZED' and uc($self->_next_token) eq 'VIEW') {
$self->{ '_is_in_materialized' } = 1;
}
####
# Find if the current keyword is a known function name
####
if (defined $last && $last && defined $self->_next_token and $self->_next_token eq '(')
{
my $word = lc($token);
$word =~ s/^[^\.]+\.//;
$word =~ s/^:://;
if (uc($last) eq 'FUNCTION' and $token =~ /^\d+$/) {
$self->{ '_is_in_function' }++;
} elsif ($word && exists $self->{ 'dict' }->{ 'pg_functions' }{$word}) {
$self->{ '_current_function' } = $word;
$self->{ '_is_in_function' }++ if ($self->{ '_is_in_create' } != 1 or $token =~ /^CAST$/i);
# Try to detect user defined functions
} elsif ($last ne '*' and !$self->_is_keyword($token, $self->_next_token(), $last)
and (exists $self->{ 'dict' }->{ 'symbols' }{ $last }
or $last =~ /^\d+$/)
)
{
$self->{ '_is_in_function' }++;
} elsif (uc($token) eq 'IN' and $self->{ '_tokens' }[1] !~ /^(SELECT|WITH|VALUES)$/i) {
$self->{ '_is_in_function' }++;
# try to detect if this is a user function
} elsif (!$self->{ '_is_in_function' } and !$self->{ '_is_in_create' }
and !$self->_is_comment($token) and length($token) > 2 # lazy exclusion of operators/comma
and $last !~ /^(?:AS|RECURSIVE|WITH|OPERATOR|INTO|TYPE|VIEW)/i
and !$self->_is_keyword($token, $self->_next_token(), $last))
{
$self->{ '_is_in_function' }++;
}
}
####
# Set open parenthesis position to know if we
# are in subqueries or function parameters
####
if ( $token eq ')')
{
$self->{ '_parenthesis_filter_level' }-- if ($self->{ '_parenthesis_filter_level' });
$self->{ '_parenthesis_with_level' }-- if ($self->{ '_parenthesis_with_level' });
$self->{ '_is_in_filter' } = 0 if (!$self->{ '_parenthesis_filter_level' });
if (!$self->{ '_is_in_function' }) {
$self->{ '_parenthesis_level' }-- if ($self->{ '_parenthesis_level' } > 0);
} else {
$self->{ '_parenthesis_function_level' }-- if ($self->{ '_parenthesis_function_level' } > 0);
if (!$self->{ '_parenthesis_function_level' }) {
$self->_set_level(pop(@{ $self->{ '_level_parenthesis_function' } }) || 0, $token, $last);
$self->_over($token,$last) if (!$self->{ '_is_in_create' } && !$self->{ '_is_in_operator' } && !$self->{ '_is_in_alter' } and uc($self->_next_token($token,$last)||'') ne 'LOOP');
}
}
$self->{ '_is_in_function' } = 0 if (!$self->{ '_parenthesis_function_level' });
$self->{ '_is_in_cast' } = 0 if ((not defined $self->_next_token or $self->_next_token !~ /^(WITH|WITHOUT)$/i) and (!$self->{ '_parenthesis_level' } or !$self->{ '_parenthesis_function_level' }));
if (!$self->{ '_parenthesis_level' } and $self->{ '_is_in_sub_query' } and defined $self->_next_token and (!$self->{ '_is_in_order_by' } or $self->_next_token =~ /^(FROM|GROUP)$/i)) {
$self->{ '_is_in_sub_query' }--;
$self->_back($token, $last);
}
if ($self->{ '_is_in_value' }) {
$self->{ '_parenthesis_level_value' }-- if ($self->{ '_parenthesis_level_value' });
}
}
elsif ( $token eq '(')
{
$self->{ '_parenthesis_filter_level' }++ if ($self->{ '_is_in_filter' });
$self->{ '_parenthesis_with_level' }++ if ($self->{ '_is_in_with' });
if ($self->{ '_is_in_function' }) {
$self->{ '_parenthesis_function_level' }++;
push(@{ $self->{ '_level_parenthesis_function' } } , $self->{ '_level' }) if ($self->{ '_parenthesis_function_level' } == 1);
} else {
if (!$self->{ '_parenthesis_level' } && $self->{ '_is_in_from' }) {
push(@{ $self->{ '_level_parenthesis' } } , $self->{ '_level' });
}
$self->{ '_parenthesis_level' }++;
if ($self->{ '_is_in_value' }) {
$self->{ '_parenthesis_level_value' }++;
}
}
if (defined $self->_next_token and $self->_next_token =~ /^(SELECT|WITH)$/i) {
$self->{ '_is_in_sub_query' }++ if (defined $last and uc($last) ne 'AS');
}
}
####
# Control case where we have to add a newline, go back and
# reset indentation after the last ) in the WITH statement
####
if ($token =~ /^WITH$/i and (!defined $last or ($last ne ')' and $self->_next_token !~ /^(TIME|FUNCTION)/i)))
{
if (!$self->{ '_is_in_partition' } and !$self->{ '_is_in_publication' } and !$self->{ '_is_in_policy' })
{
$self->{ '_is_in_with' } = 1 if (!$self->{ '_is_in_using' } and !$self->{ '_is_in_materialized' }
and uc($self->_next_token) ne 'ORDINALITY' and uc($last) ne 'START');
$self->{ 'no_break' } = 1 if (uc($self->_next_token) eq 'ORDINALITY');
}
$self->{ '_is_in_materialized' } = 0;
}
elsif ($token =~ /^WITH$/i && uc($self->_next_token) eq 'ORDINALITY')
{
$self->{ 'no_break' } = 1;
}
elsif ($token =~ /^(AS|IS)$/i && defined $self->_next_token && $self->_next_token =~ /^(NOT|\()$/)
{
$self->{ '_is_in_materialized' } = 0;
$self->{ '_is_in_with' }++ if ($self->{ '_is_in_with' } == 1);
}
elsif ($self->{ '_is_in_create' } && $token =~ /^AS$/i && defined $self->_next_token && uc($self->_next_token) eq 'SELECT')
{
$self->{ '_is_in_materialized' } = 0;
$self->{ '_is_in_create' } = 0;
}
elsif ( $token eq '[' )
{
$self->{ '_is_in_array' }++;
}
elsif ( $token eq ']' )
{
$self->{ '_is_in_array' }-- if ($self->{ '_is_in_array' });
}
elsif ( $token eq ')' )
{
$self->{ '_has_order_by' } = 0;
if ($self->{ '_is_in_distinct' }) {
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->{ '_is_in_distinct' } = 0;
$last = $self->_set_last($token, $last);
next;
}
$self->{ '_is_in_generated' } = 0 if ($self->{ '_is_in_create' } and $self->{ '_parenthesis_level' } == 1);
$self->{ '_is_in_using' } = 0 if ($self->{ '_is_in_using' } and !$self->{ '_parenthesis_level' } and !$self->{ '_is_in_policy' });
if (defined $self->_next_token and $self->_next_token !~ /^(AS|WITH|,)$/i
and (!$self->_is_comment($self->_next_token) or ($#{$self->{ '_tokens' }} >= 1 and $self->{ '_tokens' }[1] ne ','))
and !$self->{ '_parenthesis_with_level' })
{
$self->{ '_is_in_with' } = 0;
}
if ($self->{ '_is_in_create' } > 1 and defined $self->_next_token
and uc($self->_next_token) eq 'AS' and !$self->{ '_is_in_with'})
{
$self->{ '_is_in_materialized' } = 0;
$self->_new_line($token,$last) if ($last ne '(' and !$self->{ '_is_in_create' });
if ($self->{ '_is_in_returns_table' } and !$self->{ '_parenthesis_level' })
{
$self->{ '_is_in_returns_table' } = 0;
$self->_new_line($token,$last);
$self->_back($token, $last);
$self->_add_token( $token, $last );
$last = $self->_set_last($token, $last);
next;
} else {
$self->_over($token, $last) if ($self->{ '_is_in_procedure' });
}
}
if (($self->{ '_is_in_with' } > 1 || $self->{ '_is_in_operator' })
&& !$self->{ '_parenthesis_level' } && !$self->{ '_parenthesis_with_level' }
&& !$self->{ '_is_in_alter' } && !$self->{ '_is_in_policy' })
{
$self->_new_line($token,$last) if (!$self->{ '_is_in_operator' } ||
(!$self->{ '_is_in_drop' } and $self->_next_token eq ';'));
if (!$self->{ '_is_in_operator' })
{
$self->_set_level($self->_pop_level($token, $last), $token, $last);
$self->_back($token, $last);
}
$self->_add_token( $token );
if (!$self->{ '_is_in_operator' }) {
$self->_reset_level($token, $last);
}
if ($self->{ '_is_in_with' })
{
if (defined $self->_next_token && $self->_next_token eq ',') {
$self->{ '_is_in_with' } = 1;
} else {
$self->{ '_is_in_with' } = 0;
}
}
$last = $self->_set_last($token, $last);
next;
}
}
elsif (defined $self->_next_token && $self->_next_token eq '(')
{
$self->{ '_is_in_filter' } = 1 if (uc($token) eq 'FILTER');
$self->{ '_is_in_grouping' } = 1 if ($token =~ /^(GROUPING|ROLLUP)$/i);
}
elsif ( uc($token) eq 'PASSING' and defined $self->_next_token && uc($self->_next_token) eq 'BY')
{
$self->{ '_has_order_by' } = 1;
}
# EXPLAIN needs indentation in its option list
if ( uc($token) eq 'EXPLAIN' )
{
$self->{ '_is_in_explain' } = 1;
}
elsif ( uc($token) eq 'OVERLAPS' )
{
$self->{ '_is_in_overlaps' } = 1;
}
####
# Set the current kind of statement parsed
####
if ($token =~ /^(FUNCTION|PROCEDURE|SEQUENCE|INSERT|DELETE|UPDATE|SELECT|RAISE|ALTER|GRANT|REVOKE|COMMENT|DROP|RULE|COMMENT|LOCK)$/i) {
my $k_stmt = uc($1);
$self->{ '_is_in_explain' } = 0;
$self->{ '_is_in_where' } = 0;
# Set the current statement, taking care to exclude SELECT ... FOR UPDATE
# statements and ON CONFLICT DO UPDATE.
if ($k_stmt ne 'UPDATE' or (defined $self->_next_token and $self->_next_token ne ';' and $self->_next_token ne ')' and (not defined $last or $last !~ /^(DO|SHARE)$/i)))
{
if ($k_stmt !~ /^(UPDATE|DELETE)$/i || !$self->{ '_is_in_create' })
{
if ($self->{ '_current_sql_stmt' } !~ /^(GRANT|REVOKE)$/i and !$self->{ '_is_in_trigger' } and !$self->{ '_is_in_operator' } and !$self->{ '_is_in_alter' })
{
if ($k_stmt ne 'COMMENT' or $self->_next_token =~ /^(ON|IS)$/i)
{
$self->{ '_current_sql_stmt' } = $k_stmt if (not defined $last or uc($last) ne 'WITH');
}
}
}
}
}
####
# Mark that we are in a CREATE statement that needs a newline
# after a comma in the parameter, declare or column lists.
####
if ($token =~ /^(FUNCTION|PROCEDURE)$/i and $self->{ '_is_in_create' } and !$self->{'_is_in_trigger'}) {
$self->{ '_is_in_create_function' } = 1;
} elsif ($token =~ /^(FUNCTION|PROCEDURE)$/i and defined $last and uc($last) eq 'DROP') {
$self->{ '_is_in_drop_function' } = 1;
} elsif ($token =~ /^(FUNCTION|PROCEDURE)$/i and $self->{'_is_in_trigger'}) {
$self->{ '_is_in_index' } = 1;
}
if ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token !~ /^(EVENT|UNIQUE|INDEX|EXTENSION|TYPE|PUBLICATION|OPERATOR|RULE|CONVERSION|DOMAIN)$/i)
{
if ($self->_next_token =~ /^SCHEMA$/i) {
$self->{ '_is_in_create_schema' } = 1;
}
elsif ($self->{ '_is_in_create_schema' })
{
# we are certainly in a create schema statement
$self->_new_line($token,$last);
$self->{ '_level' } = 1;
$self->{ '_is_in_create_schema' }++;
}
$self->{ '_is_in_create' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^RULE$/i) {
$self->{ '_is_in_rule' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^EVENT$/i) {
$self->{ '_is_in_trigger' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^TYPE$/i) {
$self->{ '_is_in_type' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^DOMAIN$/i) {
$self->{ '_is_in_domain' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^PUBLICATION$/i) {
$self->{ '_is_in_publication' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^CONVERSION$/i) {
$self->{ '_is_in_conversion' } = 1;
} elsif ($token =~ /^(CREATE|DROP)$/i and defined $self->_next_token && $self->_next_token =~ /^OPERATOR$/i) {
$self->{ '_is_in_operator' } = 1;
$self->{ '_is_in_drop' } = 1 if ($token =~ /^DROP$/i);
} elsif ($token =~ /^ALTER$/i) {
$self->{ '_is_in_alter' }++;
} elsif ($token =~ /^DROP$/i){
$self->{ '_is_in_drop' } = 1;
} elsif ($token =~ /^VIEW$/i and $self->{ '_is_in_create' }) {
$self->{ '_is_in_index' } = 1;
$self->{ '_is_in_create' } = 0;
} elsif ($token =~ /^STATISTICS$/i and $self->{ '_is_in_create' }) {
$self->{ '_is_in_statistics' } = 1;
$self->{ '_is_in_create' } = 0;
} elsif ($token =~ /^CAST$/i and defined $self->_next_token and $self->_next_token eq '(') {
$self->{ '_is_in_cast' } = 1;
} elsif ($token =~ /^AGGREGATE$/i and $self->{ '_is_in_create' }) {
$self->{ '_is_in_aggregate' } = 1;
$self->{ '_has_order_by' } = 1;
} elsif ($token =~ /^EVENT$/i and defined $self->_next_token && $self->_next_token =~ /^TRIGGER$/i) {
$self->_over($token, $last);
$self->{ '_is_in_index' } = 1;
} elsif ($token =~ /^CREATE$/i and defined $self->_next_token && $self->_next_token =~ /^INDEX|UNIQUE/i)
{
if ($self->{ '_is_in_create_schema' })
{
$self->_new_line($token,$last);
$self->{ '_level' } = 1;
$self->{ '_is_in_create_schema' }++;
}
}
if ($self->{ '_is_in_using' } and defined $self->_next_token and $self->_next_token =~ /^(OPERATOR|AS)$/i) {
$self->{ '_is_in_using' } = 0;
}
if ($token =~ /^ALTER$/i and $self->{ '_is_in_alter' } > 1) {
$self->_new_line($token,$last);
$self->_over($token, $last) if ($last ne ',');
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
# Special case for MySQL delimiter
if ( uc($token) eq 'DELIMITER' && defined $self->_next_token &&
($self->_next_token eq '//' or $self->_next_token eq '$$'))
{
$self->{ '_mysql_delimiter' } = $self->_next_token;
}
elsif (uc($token) eq 'DELIMITER' && defined $self->_next_token &&
$self->_next_token eq ';')
{
$self->{ '_mysql_delimiter' } = '';
}
# case of the delimiter alone
if ($self->{ '_mysql_delimiter' } && $token eq $self->{ '_mysql_delimiter' })
{
$self->{ 'content' } =~ s/\n\n$/\n/s;
$self->_add_token( $token );
$self->_new_line($token,$last);
$last = $self->_set_last(';', $last);
next
}
####
# Mark that we are in a CALL statement to remove any new line
####
if ($token =~ /^CALL$/i) {
$self->{ '_is_in_call' } = 1;
}
# Increment operator tag to add newline in alter operator statement
if (($self->{ '_is_in_alter' } or uc($last) eq 'AS') and uc($token) eq 'OPERATOR') {
$self->_new_line($token,$last) if (uc($last) eq 'AS' and uc($token) eq 'OPERATOR');
$self->{ '_is_in_operator' }++;
}
####
# Mark that we are in index/constraint creation statement to
# avoid inserting a newline after comma and AND/OR keywords.
# This is also used in SET statements, taking care that we are not
# in an UPDATE statement. CREATE statements are not subject to this rule
####
if (! $self->{ '_is_in_create' } and $token =~ /^(INDEX|PRIMARY|CONSTRAINT)$/i) {
$self->{ '_is_in_index' } = 1 if ($last =~ /^(ALTER|CREATE|UNIQUE|USING|ADD)$/i);
} elsif (! $self->{ '_is_in_create' } and uc($token) eq 'SET') {
$self->{ '_is_in_index' } = 1 if ($self->{ '_current_sql_stmt' } ne 'UPDATE');
} elsif ($self->{ '_is_in_create' } and (uc($token) eq 'UNIQUE' or ($token =~ /^(PRIMARY|FOREIGN)$/i and uc($self->_next_token) eq 'KEY'))) {
$self->{ '_is_in_constraint' } = 1;
}
# Same as above but for ALTER FUNCTION/PROCEDURE/SEQUENCE or when
# we are in a CREATE FUNCTION/PROCEDURE statement
elsif ($token =~ /^(FUNCTION|PROCEDURE|SEQUENCE)$/i and !$self->{'_is_in_trigger'}) {
$self->{ '_is_in_index' } = 1 if (uc($last) eq 'ALTER' and !$self->{ '_is_in_operator' } and !$self->{ '_is_in_alter' });
if ($token =~ /^FUNCTION$/i && ($self->{ '_is_in_create' } || $self->{ '_current_sql_stmt' } eq 'COMMENT')) {
$self->{ '_is_in_index' } = 1 if (!$self->{ '_is_in_operator' });
} elsif ($token =~ /^PROCEDURE$/i && $self->{ '_is_in_create' }) {
$self->{ '_is_in_index' } = 1;
$self->{ '_is_in_procedure' } = 1;
}
}
# Deactivate index-like formatting when the RETURN(S) keyword is found
elsif ($token =~ /^(RETURN|RETURNS)$/i)
{
$self->{ '_is_in_index' } = 0;
if (uc($token) eq 'RETURNS' and uc ($self->_next_token()) eq 'TABLE') {
$self->{ '_is_in_returns_table' } = 1;
}
}
elsif ($token =~ /^AS$/i)
{
$self->{ '_is_in_materialized' } = 0;
if ( !$self->{ '_is_in_index' } and $self->{ '_is_in_from' } and $last eq ')' and uc($token) eq 'AS' and $self->_next_token() eq '(') {
$self->{ '_is_in_index' } = 1;
} else {
$self->{ '_is_in_index' } = 0;
}
$self->{ '_is_in_block' } = 1 if ($self->{ '_is_in_procedure' });
$self->{ '_is_in_over' } = 0;
}
if ($token =~ /^(BEGIN|DECLARE)$/i)
{
$self->{ '_is_in_create' }-- if ($self->{ '_is_in_create' });
if (uc($token) eq 'BEGIN')
{
push( @{ $self->{ '_begin_level' } }, ($#{ $self->{ '_begin_level' } } < 0) ? 0 : $self->{ '_level' } );
}
# $self->_add_token( $token );
# $self->_new_line($token,$last);
# $self->_over($token,$last);
# $last = $self->_set_last($token, $last);
# next;
}
####
# Mark statements that use the string_agg() or group_concat() functions
# as statements that can have an ORDER BY clause inside the call, to
# prevent applying ORDER BY formatting.
####
if ($token =~ /^(string_agg|group_concat|array_agg|percentile_cont)$/i) {
$self->{ '_has_order_by' } = 1;
} elsif ( $token =~ /^(?:GENERATED)$/i and $self->_next_token =~ /^(ALWAYS|BY)$/i ) {
$self->{ '_is_in_generated' } = 1;
} elsif ( $token =~ /^(?:TRUNCATE)$/i ) {
$self->{ 'no_break' } = 1;
} elsif ( uc($token) eq 'IDENTITY' ) {
$self->{ '_has_order_by' } = 0;
$self->{ 'no_break' } = 0;
$self->{ '_is_in_generated' } = 0;
} elsif ( $self->{ '_has_order_by' } and uc($token) eq 'ORDER' and $self->_next_token =~ /^BY$/i) {
$self->_add_token( $token, $last );
$last = $self->_set_last($token, $last);
next;
} elsif ($self->{ '_has_order_by' } and uc($token) eq 'BY') {
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^OVER$/i)
{
$self->_add_token( $token );
$self->{ '_is_in_over' } = 1;
$self->{ '_has_order_by' } = 1;
$last = $self->_set_last($token, $last);
next;
}
# Fix the case where we don't know whether we are outside a SQL function
if (defined $last and uc($last) eq 'AS' and defined $self->_next_token and $self->_next_token eq ';'
and $self->{ '_is_in_create_function' }) {
$self->{ '_is_in_create_function' } = 0;
}
####
# Set function code delimiter, it can be any string found after
# the AS keyword in function or procedure creation code
####
# Toggle _fct_code_delimiter to force the next token to be stored as the function code delimiter
if (uc($token) eq 'AS' and (!$self->{ '_fct_code_delimiter' } || $self->_next_token =~ /CODEPART/)
and $self->{ '_current_sql_stmt' } =~ /^(FUNCTION|PROCEDURE)$/i)
{
if ($self->{ '_is_in_create' } and !$self->{ '_is_in_with' } and !$self->{ '_is_in_cast' }
and $self->_next_token !~ /^(IMPLICIT|ASSIGNMENT)$/i)
{
$self->_new_line($token,$last);
$self->_add_token( $token );
$self->_reset_level($token, $last) if ($self->_next_token !~ /CODEPARTB/);
} else {
$self->_add_token( $token );
}
if ($self->_next_token !~ /(CODEPART|IMPLICIT|ASSIGNMENT)/ || $self->_next_token =~ /^'/)
{
if (!$self->{ '_is_in_cast' } and $self->{ '_is_in_create' })
{
# extract potential code joined with the code separator
if ($self->{ '_tokens' }->[0] =~ s/^'(.)/$1/)
{
$self->{ '_tokens' }->[0] =~ s/[;\s]*'$//;
my @tmp_arr = $self->tokenize_sql($self->{ '_tokens' }->[0]);
push(@tmp_arr, ";", "'");
shift(@{ $self->{ '_tokens' } });
unshift(@{ $self->{ '_tokens' } }, "'", @tmp_arr);
}
$self->{ '_fct_code_delimiter' } = '1';
}
}
$self->{ '_is_in_create' } = 0;
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^(INSTEAD|ALSO)$/i and defined $last and uc($last) eq 'DO')
{
$self->_add_token( $token );
$self->_new_line($token,$last);
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^DO$/i and defined $self->_next_token and $self->_next_token =~ /^(INSTEAD|ALSO|UPDATE|NOTHING)$/i)
{
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^DO$/i and !$self->{ '_fct_code_delimiter' } and $self->_next_token =~ /^\$[^\s]*/)
{
@{ $self->{ '_begin_level' } } = ();
$self->{ '_fct_code_delimiter' } = '1';
$self->{ '_is_in_create_function' } = 1;
$self->_new_line($token,$last) if ($self->{ 'content' } !~ /\n$/s);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
# Store function code delimiter
if ($self->{ '_fct_code_delimiter' } eq '1')
{
if ($self->_next_token =~ /CODEPART/) {
$self->{ '_fct_code_delimiter' } = '0';
} elsif ($token =~ /^'.*'$/) {
$self->{ '_fct_code_delimiter' } = "'";
} else {
$self->{ '_fct_code_delimiter' } = $token;
}
$self->_add_token( $token );
if (!$self->{ '_is_in_create_function' } or $self->_next_token ne ','
or $self->{ '_tokens' }[1] !~ /KEYWCONST/) {
$self->_new_line($token,$last);
}
if (defined $self->_next_token
and $self->_next_token !~ /^(DECLARE|BEGIN)$/i) {
$self->_over($token,$last);
$self->{ '_language_sql' } = 1;
}
if ($self->{ '_fct_code_delimiter' } eq "'")
{
$self->{ '_is_in_block' } = -1;
$self->{ '_is_in_exception' } = 0;
$self->_reset_level($token, $last) if ($self->_next_token eq ';');
$self->{ '_fct_code_delimiter' } = '';
$self->{ '_current_sql_stmt' } = '';
$self->{ '_is_in_procedure' } = 0;
$self->{ '_is_in_function' } = 0;
$self->{ '_is_in_create_function' } = 0;
$self->{ '_language_sql' } = 0;
}
$last = $self->_set_last($token, $last);
next;
}
# With the SQL language the code delimiter can be included with the keyword; try to detect and fix it
if ($self->{ '_fct_code_delimiter' } and $token =~ s/(.)\Q$self->{ '_fct_code_delimiter' }\E$/$1/)
{
unshift(@{ $self->{ '_tokens' } }, $self->{ '_fct_code_delimiter' });
}
# Deactivate the block mode when the code delimiter is found for the second time
if ($self->{ '_fct_code_delimiter' } && $token eq $self->{ '_fct_code_delimiter' })
{
$self->{ '_is_in_block' } = -1;
$self->{ '_is_in_exception' } = 0;
$self->{ '_is_in_create_function' } = 0;
$self->_reset_level($token, $last);
$self->{ '_fct_code_delimiter' } = '';
$self->{ '_current_sql_stmt' } = '';
$self->{ '_language_sql' } = 0;
$self->_new_line($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
####
# Mark when we are parsing a DECLARE or a BLOCK section. When
# entering a BLOCK section store the current indentation level
####
if (uc($token) eq 'DECLARE' and $self->{ '_is_in_create_function' })
{
$self->{ '_is_in_block' } = -1;
$self->{ '_is_in_exception' } = 0;
$self->{ '_is_in_declare' } = 1;
$self->_reset_level($token, $last);
$self->_new_line($token,$last);
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->_over($token,$last);
$last = $self->_set_last($token, $last);
$self->{ '_is_in_create_function' } = 0;
next;
}
elsif ( uc($token) eq 'BEGIN' )
{
$self->{ '_is_in_declare' } = 0;
if ($self->{ '_is_in_block' } == -1) {
$self->_reset_level($token, $last);
}
$self->_new_line($token,$last);
$self->_add_token( $token );
if (defined $self->_next_token && $self->_next_token !~ /^(WORK|TRANSACTION|ISOLATION|;)$/i) {
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->{ '_is_in_block' }++;
# Store current indent position to print END at the right level
$self->_push_level($self->{ '_level' }, $token, $last);
}
$self->{ '_is_in_work' }++ if (!$self->{ 'no_grouping' } and defined $self->_next_token && $self->_next_token =~ /^(WORK|TRANSACTION|ISOLATION|;)$/i);
$last = $self->_set_last($token, $last);
next;
}
elsif ( $token =~ /^(COMMIT|ROLLBACK)$/i and (not defined $last or uc($last) ne 'ON') and !$self->{ '_is_in_procedure' } )
{
$self->{ '_is_in_work' } = 0;
$self->{ '_is_in_declare' } = 0;
$self->{ '_is_in_create_function' } = 0;
$self->_new_line($token,$last);
$self->_set_level($self->_pop_level($token, $last), $token, $last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
@{ $self->{ '_begin_level' } } = ();
next;
}
elsif ( $token =~ /^(COMMIT|ROLLBACK)$/i and defined $self->_next_token and $self->_next_token eq ';' and $self->{ '_is_in_procedure' } )
{
$self->_new_line($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
@{ $self->{ '_begin_level' } } = ();
next;
}
elsif ( $token =~ /^FETCH$/i and defined $last and $last eq ';')
{
$self->_new_line($token,$last);
$self->_back($token, $last) if ($self->{ '_is_in_block' } == -1);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
$self->{ '_is_in_fetch' } = 1;
next;
}
####
# Special case where we want to add a newline into ) AS (
####
if (uc($token) eq 'AS' and $last eq ')' and $self->_next_token eq '(')
{
$self->_new_line($token,$last);
}
# and before RETURNS with increasing indent level
elsif (uc($token) eq 'RETURNS')
{
$self->_new_line($token,$last);
$self->_over($token,$last) if (uc($self->_next_token) ne 'NULL');
}
# and before WINDOW
elsif (uc($token) eq 'WINDOW')
{
$self->_new_line($token,$last);
$self->_set_level($self->_pop_level($token, $last), $token, $last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
$self->{ '_has_order_by' } = 1;
next;
}
# Treat DISTINCT as a modifier of the whole SELECT clause, not of the first column only
if (uc($token) eq 'ON' && defined $last && uc($last) eq 'DISTINCT')
{
$self->{ '_is_in_distinct' } = 1;
$self->_over($token,$last);
}
elsif (uc($token) eq 'DISTINCT' && defined $last && uc($last) eq 'SELECT' && defined $self->_next_token && $self->_next_token !~ /^ON$/i)
{
$self->_add_token( $token );
$self->_new_line($token,$last) if (!$self->{'wrap_after'});
$self->_over($token,$last);
$last = $self->_set_last($token, $last);
next;
}
if ( $rule )
{
$self->_process_rule( $rule, $token );
}
elsif ($token =~ /^(LANGUAGE|SECURITY|COST)$/i && !$self->{ '_is_in_alter' } && !$self->{ '_is_in_drop' } )
{
@{ $self->{ '_begin_level' } } = ();
$self->_new_line($token,$last) if (uc($token) ne 'SECURITY' or (defined $last and uc($last) ne 'LEVEL'));
$self->_add_token( $token );
}
elsif ($token =~ /^PARTITION$/i && !$self->{ '_is_in_over' } && defined $last && $last ne '(')
{
$self->{ '_is_in_partition' } = 1;
if ($self->{ '_is_in_create' } && defined $last and $last eq ')')
{
$self->_new_line($token,$last);
$self->_set_level($self->_pop_level($token, $last), $token, $last) if ($self->{ '_level' });
$self->_add_token( $token );
}
else
{
$self->_add_token( $token );
}
}
elsif ($token =~ /^POLICY$/i)
{
$self->{ '_is_in_policy' } = 1;
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^TRUNCATE$/i && $self->_next_token !~ /^(TABLE|ONLY|,)$/i)
{
$self->{ '_is_in_truncate' } = 1;
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
my $str_tmp = join(' ', @{ $self->{ '_tokens' } });
$str_tmp =~ s/;.*//s;
if ($str_tmp =~ / , /s)
{
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->{ '_is_in_truncate' } = 2;
}
next;
}
elsif ($token =~ /^(TABLE|ONLY)$/i && uc($last) eq 'TRUNCATE')
{
$self->{ '_is_in_truncate' } = 1;
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
my $str_tmp = join(' ', @{ $self->{ '_tokens' } });
$str_tmp =~ s/;.*//s;
if ($str_tmp =~ / , /s)
{
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->{ '_is_in_truncate' } = 2;
}
next;
}
elsif ($token =~ /^(RESTART|CASCADE)$/i && $self->{ '_is_in_truncate' } == 2)
{
$self->_new_line($token,$last);
$self->_back($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^TRIGGER$/i and defined $last and $last =~ /^(CREATE|CONSTRAINT|REPLACE)$/i)
{
$self->{ '_is_in_trigger' } = 1;
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^(BEFORE|AFTER|INSTEAD)$/i and $self->{ '_is_in_trigger' })
{
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^EXECUTE$/i and ($self->{ '_is_in_trigger' } or (defined $last and uc($last) eq 'AS')))
{
$self->_new_line($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ( $token eq '(' )
{
if ($self->{ '_is_in_aggregate' } && defined $self->_next_token and ($self->_is_keyword($self->_next_token) or $self->_is_sql_keyword($self->_next_token)) and uc($self->_next_token) ne 'VARIADIC') {
$self->{ '_is_in_aggregate' } = 0;
$self->{ '_has_order_by' } = 0;
}
$self->{ '_is_in_create' }++ if ($self->{ '_is_in_create' });
$self->{ '_is_in_constraint' }++ if ($self->{ '_is_in_constraint' });
$self->_add_token( $token, $last );
if (defined $self->_next_token and uc($self->_next_token) eq 'SELECT')
{
$self->{ '_is_in_cast' } = 0;
$self->{ '_is_subquery' }++;
}
if (defined $self->_next_token and $self->_next_token eq ')' and !$self->{ '_is_in_create' }) {
$last = $self->_set_last($token, $last);
next;
}
if ( !$self->{ '_is_in_index' } && !$self->{ '_is_in_publication' }
&& !$self->{ '_is_in_distinct' } && !$self->{ '_is_in_filter' }
&& !$self->{ '_is_in_grouping' } && !$self->{ '_is_in_partition' }
&& !$self->{ '_is_in_over' } && !$self->{ '_is_in_trigger' }
&& !$self->{ '_is_in_policy' } && !$self->{ '_is_in_aggregate' }
&& !$self->{ 'no_break' } && !$self->{ '_is_in_generated' }
) {
if (uc($last) eq 'AS' || $self->{ '_is_in_create' } == 2 || uc($self->_next_token) eq 'CASE')
{
$self->_new_line($token,$last) if ((!$self->{'_is_in_function'} or $self->_next_token =~ /^CASE$/i) and $self->_next_token ne ')' and $self->_next_token !~ /^(PARTITION|ORDER)$/i);
}
if ($self->{ '_is_in_with' } == 1 or $self->{ '_is_in_explain' }) {
$self->_over($token,$last);
$self->_new_line($token,$last) if (!$self->{ 'wrap_after' });
$last = $self->_set_last($token, $last) if (!$self->{ '_is_in_explain' } || $self->{ 'wrap_after' });
next;
}
if (!$self->{ '_is_in_if' } and !$self->{ '_is_in_alter' } and (!$self->{ '_is_in_function' } or $last ne '('))
{
$self->_over($token,$last) if ($self->{ '_is_in_operator' } <= 2 && $self->{ '_is_in_create' } <= 2);
if (!$self->{ '_is_in_function' } and !$self->_is_type($self->_next_token))
{
if ($self->{ '_is_in_operator' } == 1) {
$self->_new_line($token,$last);
$self->{ '_is_in_operator' }++;
} elsif ($self->{ '_is_in_type' }) {
$self->_new_line($token,$last);
}
}
$last = $self->_set_last($token, $last);
}
if ($self->{ '_is_in_type' } == 1) {
$last = $self->_set_last($token, $last);
next;
}
}
if ($self->{ 'format_type' } && $self->{ '_current_sql_stmt' } =~ /(FUNCTION|PROCEDURE)/i
&& $self->{ '_is_in_create' } == 2
&& (not defined $self->_next_token or $self->_next_token ne ')')
) {
$self->_over($token,$last) if ($self->{ '_is_in_block' } < 0);
$self->_new_line($token,$last);
next;
}
}
elsif ( $token eq ')' )
{
if (defined $self->_next_token)
{
my $next = quotemeta($self->_next_token) || 'SELECT';
if (!$self->{ '_parenthesis_level' } and defined $self->_next_token
and $self->_is_keyword($self->_next_token) or (
!grep(/^$next$/, %{$self->{ 'dict' }->{ 'symbols' }})
)
)
{
$self->{ '_is_in_where' } = 0;
}
}
if ($self->{ '_is_in_constraint' } and defined $self->_next_token
and ($self->_next_token eq ',' or $self->_next_token eq ')')) {
$self->{ '_is_in_constraint' } = 0;
} elsif ($self->{ '_is_in_constraint' }) {
$self->{ '_is_in_constraint' }--;
}
# Case of CTE and explain
if ($self->{ '_is_in_with' } == 1 || $self->{ '_is_in_explain' })
{
$self->_back($token, $last);
$self->_new_line($token,$last) if (!$self->{ 'wrap_after' } && !$self->{ '_is_in_overlaps' });
$self->_add_token( $token );
$last = $self->_set_last($token, $last) if ($token ne ')' or uc($self->_next_token) ne 'AS');
$self->{ '_is_in_explain' } = 0;
next;
}
if ( ($self->{ 'format_type' } && $self->{ '_current_sql_stmt' } =~ /(FUNCTION|PROCEDURE)/i
&& $self->{ '_is_in_create' } == 2) || (defined $self->_next_token and uc($self->_next_token) eq 'INHERITS')
)
{
$self->_back($token, $last) if ($self->{ '_is_in_block' } < 0);
$self->_new_line($token,$last) if (defined $last && $last ne '(');
}
if ($self->{ '_is_in_index' } || $self->{ '_is_in_alter' }
|| $self->{ '_is_in_partition' } || $self->{ '_is_in_policy' }
|| (defined $self->_next_token and $self->_next_token =~ /^OVER$/i)
) {
$self->_add_token( '' );
$self->_add_token( $token );
$self->{ '_is_in_over' } = 0 if (!$self->{ '_parenthesis_level' });
$last = $self->_set_last($token, $last);
$self->{ '_is_in_create' }-- if ($self->{ '_is_in_create' });
next;
}
if (defined $self->_next_token && $self->_next_token !~ /FILTER/i)
{
my $add_nl = 0;
$add_nl = 1 if ($self->{ '_is_in_create' } > 1
and defined $last and $last ne '('
and !$self->{ '_is_in_cast' }
and (not defined $self->_next_token or $self->_next_token =~ /^(TABLESPACE|PARTITION|AS|;)$/i or ($self->_next_token =~ /^ON$/i and !$self->{ '_parenthesis_level' }))
);
$add_nl = 1 if ($self->{ '_is_in_type' } == 1
and $self->_next_token !~ /^AS$/i
and (not defined $self->_next_token or $self->_next_token eq ';')
);
$add_nl = 1 if ($self->{ '_current_sql_stmt' } ne 'INSERT'
and !$self->{ '_is_in_function' }
and (defined $self->_next_token
and $self->_next_token =~ /^(SELECT|WITH)$/i)
and $self->{ '_tokens' }[1] !~ /^(ORDINALITY|FUNCTION)$/i
and ($self->{ '_is_in_create' } or $last ne ')' and $last ne ']')
and (uc($self->_next_token) ne 'WITH' or uc($self->{ '_tokens' }->[ 1 ]) !~ /TIME|INOUT/i)
);
$self->_new_line($token,$last) if ($add_nl);
if (!$self->{ '_is_in_grouping' } and !$self->{ '_is_in_trigger' }
and !$self->{ 'no_break' }
and !$self->{ '_is_in_generated' }
and $self->{ '_is_in_create' } <= 2
and $self->_next_token !~ /^LOOP$/i
)
{
$self->_back($token, $last);
}
$self->{ '_is_in_create' }-- if ($self->{ '_is_in_create' });
if ($self->{ '_is_in_type' })
{
$self->_reset_level($token, $last) if ($self->{ '_is_in_block' } == -1 && !$self->{ '_parenthesis_level' });
$self->{ '_is_in_type' }--;
}
}
if (!$self->{ '_parenthesis_level' })
{
$self->{ '_is_in_filter' } = 0;
$self->{ '_is_in_within' } = 0;
$self->{ '_is_in_grouping' } = 0;
$self->{ '_is_in_over' } = 0;
$self->{ '_has_order_by' } = 0;
$self->{ '_is_in_policy' } = 0;
$self->{ '_is_in_aggregate' } = 0;
}
$self->_add_token( $token );
# Do not go further if this is the last token
if (not defined $self->_next_token) {
$last = $self->_set_last($token, $last);
next;
}
# When closing CTE statement go back again
if ( ($self->_next_token =~ /^(?:SELECT|INSERT|UPDATE|DELETE)$/i and !$self->{ '_is_in_policy' })
or ($self->{ '_is_in_with' } and $self->{ '_is_subquery' }
and $self->{ '_is_subquery' } % 2 == 0) ) {
$self->_back($token, $last) if ($self->{ '_current_sql_stmt' } ne 'INSERT'
and (!$self->{ '_parenthesis_level' } or !defined $self->_next_token
or uc($self->_next_token) eq 'AS'
or ($#{$self->{ '_tokens' }} >= 1 and $self->{ '_tokens' }->[ 1 ] eq ',')));
}
$self->{ '_is_subquery' }-- if ($self->{ '_is_subquery' }
and defined $self->_next_token and $#{$self->{ '_tokens' }} >= 1
and (uc($self->_next_token) eq 'AS' or $self->{ '_tokens' }->[ 1 ] eq ','));
if ($self->{ '_is_in_create' } <= 1)
{
my $next_tok = quotemeta($self->_next_token);
$self->_new_line($token,$last)
if (defined $self->_next_token
and $self->_next_token !~ /^(?:AS|IS|THEN|INTO|BETWEEN|ON|IN|FILTER|WITHIN|DESC|ASC|WITHOUT|CASCADE)$/i
and ($self->_next_token !~ /^(AND|OR)$/i or !$self->{ '_is_in_if' })
and $self->_next_token ne ')'
and $self->_next_token !~ /^:/
and $self->_next_token ne ';'
and $self->_next_token ne ','
and $self->_next_token ne '||'
and uc($self->_next_token) ne 'CONCAT'
and ($self->_is_keyword($self->_next_token) or $self->_is_function($self->_next_token))
and $self->{ '_current_sql_stmt' } !~ /^(GRANT|REVOKE)$/
and !exists $self->{ 'dict' }->{ 'symbols' }{ $next_tok }
and !$self->{ '_is_in_over' }
and !$self->{ '_is_in_cast' }
and !$self->{ '_is_in_domain' }
);
}
}
elsif ( $token eq ',' )
{
my $add_newline = 0;
# Format INSERT with multiple values
if ($self->{ '_current_sql_stmt' } eq 'INSERT' and $last eq ')' and $self->_next_token eq '(') {
$self->_new_line($token,$last) if ($self->{ 'comma' } eq 'start');
$self->_add_token( $token );
$self->_new_line($token,$last) if ($self->{ 'comma' } eq 'end');
next;
}
$self->{ '_is_in_constraint' } = 0 if ($self->{ '_is_in_constraint' } == 1);
$self->{ '_col_count' }++ if (!$self->{ '_is_in_function' });
if (($self->{ '_is_in_over' } or $self->{ '_has_order_by' }) and !$self->{ '_parenthesis_level' } and !$self->{ '_parenthesis_function_level' })
{
$self->{ '_is_in_over' } = 0;
$self->{ '_has_order_by' } = 0;
$self->_back($token, $last);
}
$add_newline = 1 if ( !$self->{ 'no_break' }
&& !$self->{ '_is_in_generated' }
&& !$self->{ '_is_in_function' }
&& !$self->{ '_is_in_distinct' }
&& !$self->{ '_is_in_array' }
&& ($self->{ 'comma_break' } || $self->{ '_current_sql_stmt' } ne 'INSERT')
&& ($self->{ '_current_sql_stmt' } ne 'RAISE')
&& ($self->{ '_current_sql_stmt' } !~ /^(FUNCTION|PROCEDURE)$/
|| $self->{ '_fct_code_delimiter' } ne '')
&& !$self->{ '_is_in_where' }
&& !$self->{ '_is_in_drop' }
&& !$self->{ '_is_in_index' }
&& !$self->{ '_is_in_aggregate' }
&& !$self->{ '_is_in_alter' }
&& !$self->{ '_is_in_publication' }
&& !$self->{ '_is_in_call' }
&& !$self->{ '_is_in_policy' }
&& !$self->{ '_is_in_grouping' }
&& !$self->{ '_is_in_partition' }
&& ($self->{ '_is_in_constraint' } <= 1)
&& ($self->{ '_is_in_create' } <= 2)
&& $self->{ '_is_in_operator' } != 1
&& !$self->{ '_has_order_by' }
&& $self->{ '_current_sql_stmt' } !~ /^(GRANT|REVOKE)$/
&& ($self->_next_token !~ /^('$|\s*\-\-)/is or ($self->_next_token !~ /^'$/is and $self->{ 'no_comments' }))
&& !$self->{ '_parenthesis_function_level' }
&& (!$self->{ '_col_count' } or $self->{ '_col_count' } > ($self->{ 'wrap_after' } - 1))
|| ($self->{ '_is_in_with' } and !$self->{ 'wrap_after' })
);
$self->{ '_col_count' } = 0 if ($self->{ '_col_count' } > ($self->{ 'wrap_after' } - 1));
$add_newline = 0 if ($self->{ '_is_in_using' } and $self->{ '_parenthesis_level' });
$add_newline = 0 if ($self->{ 'no_break' });
if ($self->{ '_is_in_with' } >= 1 && !$self->{ '_parenthesis_level' }) {
$add_newline = 1 if (!$self->{ 'wrap_after' });
}
if ($self->{ 'format_type' } && $self->{ '_current_sql_stmt' } =~ /(FUNCTION|PROCEDURE)/i && $self->{ '_is_in_create' } == 2) {
$add_newline = 1;
}
if ($self->{ '_is_in_alter' } && $self->{ '_is_in_operator' } >= 2) {
$add_newline = 1 if (defined $self->_next_token and $self->_next_token =~ /^(OPERATOR|FUNCTION)$/i);
}
$add_newline = 1 if ($self->{ '_is_in_returns_table' });
$self->_new_line($token,$last) if ($add_newline and $self->{ 'comma' } eq 'start');
$self->_add_token( $token );
$add_newline = 0 if ($self->{ '_is_in_value' } and $self->{ '_parenthesis_level_value' });
$add_newline = 0 if ($self->{ '_is_in_function' } or $self->{ '_is_in_statistics' });
$add_newline = 0 if (defined $self->_next_token and !$self->{ 'no_comments' } and $self->_is_comment($self->_next_token));
$add_newline = 0 if (defined $self->_next_token and $self->_next_token =~ /KEYWCONST/ and $#{ $self->{ '_tokens' } } >= 1 and $self->{ '_tokens' }[1] =~ /^(LANGUAGE|STRICT)$/i);
$add_newline = 1 if ($self->{ '_is_in_truncate' });
$self->_new_line($token,$last) if ($add_newline and $self->{ 'comma' } eq 'end' and ($self->{ 'comma_break' } || $self->{ '_current_sql_stmt' } ne 'INSERT'));
}
elsif ( $token eq ';' or $token =~ /^\\(?:g|crosstabview|watch)/ )
{
# statement separator or executing psql meta command (prefix 'g' includes all its variants)
$self->_add_token($token);
next if ($token eq ';' and $self->{ '_is_in_case' } and uc($last) ne 'CASE');
if ($self->{ '_is_in_rule' }) {
$self->_back($token, $last);
}
elsif ($self->{ '_is_in_create' } && $self->{ '_is_in_block' } > -1)
{
$self->_pop_level($token, $last);
}
# Initialize most statement-related variables
$self->{ 'no_break' } = 0;
$self->{ '_is_in_generated' } = 0;
$self->{ '_is_in_where' } = 0;
$self->{ '_is_in_between' } = 0;
$self->{ '_is_in_from' } = 0;
$self->{ '_is_in_join' } = 0;
$self->{ '_is_in_create' } = 0;
$self->{ '_is_in_create_schema' } = 0;
$self->{ '_is_in_alter' } = 0;
$self->{ '_is_in_rule' } = 0;
$self->{ '_is_in_publication' } = 0;
$self->{ '_is_in_call' } = 0;
$self->{ '_is_in_type' } = 0;
$self->{ '_is_in_domain' } = 0;
$self->{ '_is_in_function' } = 0;
$self->{ '_current_function' } = '';
$self->{ '_is_in_procedure' } = 0;
$self->{ '_is_in_index' } = 0;
$self->{ '_is_in_statistics' } = 0;
$self->{ '_is_in_cast' } = 0;
$self->{ '_is_in_if' } = 0;
$self->{ '_is_in_with' } = 0;
$self->{ '_is_in_overlaps' } = 0;
$self->{ '_has_order_by' } = 0;
$self->{ '_has_over_in_join' } = 0;
$self->{ '_parenthesis_level' } = 0;
$self->{ '_parenthesis_function_level' } = 0;
$self->{ '_is_in_constraint' } = 0;
$self->{ '_is_in_distinct' } = 0;
$self->{ '_is_in_array' } = 0;
$self->{ '_is_in_filter' } = 0;
$self->{ '_parenthesis_filter_level' } = 0;
$self->{ '_is_in_partition' } = 0;
$self->{ '_is_in_over' } = 0;
$self->{ '_is_in_policy' } = 0;
$self->{ '_is_in_truncate' } = 0;
$self->{ '_is_in_trigger' } = 0;
$self->{ '_is_in_using' } = 0;
$self->{ '_and_level' } = 0;
$self->{ '_col_count' } = 0;
$self->{ '_is_in_drop' } = 0;
$self->{ '_is_in_conversion' } = 0;
$self->{ '_is_in_operator' } = 0;
$self->{ '_is_in_explain' } = 0;
$self->{ '_is_in_sub_query' } = 0;
$self->{ '_is_in_fetch' } = 0;
$self->{ '_is_in_aggregate' } = 0;
$self->{ '_is_in_value' } = 0;
$self->{ '_parenthesis_level_value' } = 0;
$self->{ '_parenthesis_with_level' } = 0;
$self->{ '_is_in_returns_table' } = 0;
$self->{ '_has_limit' } = 0;
$self->{ '_not_a_type' } = 0;
$self->{ '_is_subquery' } = 0;
$self->{ '_is_in_order_by' } = 0;
$self->{ '_is_in_materialized' } = 0;
$self->{ '_is_in_drop_function' } = 0;
if ( $self->{ '_insert_values' } )
{
if ($self->{ '_is_in_block' } == -1 and !$self->{ '_is_in_declare' } and !$self->{ '_fct_code_delimiter' }) {
$self->_reset_level($token, $last);
}
elsif ($self->{ '_is_in_block' } == -1 and $self->{ '_current_sql_stmt' } eq 'INSERT' and !$self->{ '_is_in_create' } and !$self->{ '_is_in_create_function' })
{
$self->_back($token, $last);
$self->_pop_level($token, $last);
}
else
{
$self->_set_level($self->_pop_level($token, $last), $token, $last);
}
$self->{ '_insert_values' } = 0;
}
$self->{ '_current_sql_stmt' } = '';
$self->{ 'break' } = "\n" unless ( $self->{ 'spaces' } != 0 );
#$self->_new_line($token,$last) if ($last !~ /^(VALUES|IMPLICIT|ASSIGNMENT)$/i);
$self->_new_line($token,$last) if ($last !~ /^VALUES$/i);
# Add an additional newline after ; when we are not in a function
if ($self->{ '_is_in_block' } == -1 and !$self->{ '_is_in_work' } and !$self->{ '_language_sql' }
and !$self->{ '_is_in_declare' } and uc($last) ne 'VALUES')
{
$self->{ '_new_line' } = 0;
$self->_new_line($token,$last);
$self->{ 'stmt_number' }++;
$self->{ 'content' } .= "-- Statement # $self->{ 'stmt_number' }\n" if ($self->{ 'numbering' } and $#{ $self->{ '_tokens' } } > 0);
}
# End of statement; remove all indentation when we are not in a BEGIN/END block
if (!$self->{ '_is_in_declare' } and $self->{ '_is_in_block' } == -1 and !$self->{ '_fct_code_delimiter' })
{
$self->_reset_level($token, $last);
}
#elsif ((not defined $self->_next_token or $self->_next_token !~ /^INSERT$/) and !$self->{ '_fct_code_delimiter' })
elsif (!$self->{ '_language_sql' } and (not defined $self->_next_token or $self->_next_token !~ /^INSERT$/))
{
if ($#{ $self->{ '_level_stack' } } == -1) {
$self->_set_level(($self->{ '_is_in_declare' }) ? 1 : ($self->{ '_is_in_block' }+1), $token, $last);
} else {
$self->_set_level($self->{ '_level_stack' }[-1], $token, $last);
}
}
$last = $self->_set_last($token, $last);
}
elsif ($token =~ /^FOR$/i)
{
if ($self->{ '_is_in_policy' })
{
$self->_over($token,$last);
$self->_new_line($token,$last);
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
if ($self->_next_token =~ /^(UPDATE|KEY|NO|VALUES)$/i)
{
$self->_back($token, $last) if (!$self->{ '_has_limit' } and ($#{$self->{ '_level_stack' }} == -1
or $self->{ '_level' } > $self->{ '_level_stack' }[-1]));
$self->_new_line($token,$last);
$self->{ '_has_limit' } = 0;
}
elsif ($self->_next_token =~ /^EACH$/ and $self->{ '_is_in_trigger' })
{
$self->_new_line($token,$last);
}
$self->_add_token( $token );
# cover FOR in cursor
$self->_over($token,$last) if (uc($self->_next_token) eq 'SELECT');
$last = $self->_set_last($token, $last);
}
elsif ( $token =~ /^(?:FROM|WHERE|SET|RETURNING|HAVING|VALUES)$/i )
{
if (uc($token) eq 'FROM' and $self->{ '_has_order_by' } and !$self->{ '_parenthesis_level' })
{
$self->_back($token, $last) if ($self->{ '_has_order_by' });
}
$self->{ 'no_break' } = 0;
$self->{ '_col_count' } = 0;
# special cases for create partition statement
if ($token =~ /^VALUES$/i && defined $last and $last =~ /^(FOR|IN)$/i)
{
$self->_add_token( $token );
$self->{ 'no_break' } = 1;
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^FROM$/i && defined $last and uc($last) eq 'VALUES')
{
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
# Case of DISTINCT FROM clause
if ($token =~ /^FROM$/i)
{
if (uc($last) eq 'DISTINCT' || $self->{ '_is_in_fetch' } || $self->{ '_is_in_alter' } || $self->{ '_is_in_conversion' })
{
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
$self->{ '_is_in_from' }++ if (!$self->{ '_is_in_function' } && !$self->{ '_is_in_partition' });
}
if ($token =~ /^WHERE$/i && !$self->{ '_is_in_filter' })
{
$self->_back($token, $last) if ($self->{ '_has_over_in_join' });
$self->{ '_is_in_where' }++;
$self->{ '_is_in_from' }-- if ($self->{ '_is_in_from' });
$self->{ '_is_in_join' } = 0;
$self->{ '_has_over_in_join' } = 0;
}
elsif (!$self->{ '_is_in_function' })
{
$self->{ '_is_in_where' }-- if ($self->{ '_is_in_where' });
}
if ($token =~ /^SET$/i and $self->{ '_is_in_create' })
{
# Add newline before SET statement in function header
$self->_new_line($token,$last) if (not defined $last or $last !~ /^(DELETE|UPDATE)$/i);
}
elsif ($token =~ /^WHERE$/i and $self->{ '_current_sql_stmt' } eq 'DELETE')
{
$self->_new_line($token,$last);
$self->_add_token( $token );
$self->_over($token,$last);
$last = $self->_set_last($token, $last);
$self->{ '_is_in_join' } = 0;
$last = $self->_set_last($token, $last);
next;
}
elsif ($token =~ /^SET$/i and defined $last and uc($last) eq 'UPDATE' and !$self->_is_keyword($self->_next_token()))
{
$self->{ '_is_in_index' } = 0;
$self->{ '_is_in_from' } = 0;
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->_over($token,$last);
$last = $self->_set_last($token, $last);
next;
}
elsif ($token !~ /^FROM$/i or (!$self->{ '_is_in_function' } and !$self->{ '_is_in_statistics' }
and $self->{ '_current_sql_stmt' } !~ /(DELETE|REVOKE)/))
{
if (!$self->{ '_is_in_filter' } and ($token !~ /^SET$/i or !$self->{ '_is_in_index' }))
{
$self->_back($token, $last) if ((uc($token) ne 'VALUES' or $self->{ '_current_sql_stmt' } ne 'INSERT' and $last ne "'")and (uc($token) !~ /^WHERE$/i or $self->{'_is_in_with' } < 2 or $self->{ '_level' } > 1));
if (uc($token) eq 'WHERE' and $self->{'_is_in_function' }
and $self->{ '_is_subquery' } <= 2
)
{
$self->_over($token, $last);
}
$self->_new_line($token,$last) if (!$self->{ '_is_in_rule' } and ($last !~ /^DEFAULT$/i or $self->_next_token() ne ';'));
}
}
else
{
if (uc($token) eq 'FROM' and $self->{ '_is_in_sub_query' }
and !grep(/^\Q$last\E$/i, @extract_keywords)
and ($self->{ '_insert_values' } or $self->{ '_is_in_function' })
and (!$self->{ '_is_in_function' } or
!grep(/^$self->{ '_current_function' }$/, @have_from_clause))
)
{
$self->_new_line($token,$last);
$self->_back($token, $last);
}
if (uc($token) eq 'FROM') {
$self->{ '_current_function' } = '';
}
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
if ($token =~ /^VALUES$/i and !$self->{ '_is_in_rule' } and !$self->{ 'comma_break' } and ($self->{ '_current_sql_stmt' } eq 'INSERT' or $last eq '('))
{
$self->_over($token,$last);
if ($self->{ '_current_sql_stmt' } eq 'INSERT' or $last eq '(')
{
$self->{ '_insert_values' } = 1;
$self->_push_level($self->{ '_level' }, $token, $last);
}
}
if ($token =~ /^VALUES$/i and $last eq '(')
{
$self->{ '_is_in_value' } = 1;
}
if (uc($token) eq 'WHERE')
{
$self->_add_token( $token, $last );
$self->{ '_is_in_value' } = 0;
$self->{ '_parenthesis_level_value' } = 0;
}
else
{
$self->_add_token( $token );
}
if ($token =~ /^VALUES$/i and $last eq '(')
{
$self->_over($token,$last);
}
elsif ( $token =~ /^SET$/i && $self->{ '_current_sql_stmt' } eq 'UPDATE' )
{
$self->_new_line($token,$last) if (!$self->{ 'wrap_after' });
$self->_over($token,$last);
}
elsif ( !$self->{ '_is_in_over' } and !$self->{ '_is_in_filter' } and ($token !~ /^SET$/i or $self->{ '_current_sql_stmt' } eq 'UPDATE') )
{
if (defined $self->_next_token and $self->_next_token !~ /\(|;/
and ($self->_next_token !~ /^(UPDATE|KEY|NO)$/i || uc($token) eq 'WHERE'))
{
$self->_new_line($token,$last) if (!$self->{ 'wrap_after' });
$self->_over($token,$last);
}
}
}
# Add newline before INSERT and DELETE if last token was AS (prepared statement)
elsif (defined $last and $token =~ /^(?:INSERT|DELETE|UPDATE)$/i and uc($last) eq 'AS')
{
$self->_new_line($token,$last);
$self->_add_token( $token );
}
elsif ( $self->{ '_current_sql_stmt' } !~ /^(GRANT|REVOKE)$/
and $token =~ /^(?:SELECT|PERFORM|UPDATE|DELETE)$/i
and (!$self->{ '_is_in_policy' } || $self->{ 'format_type' })
)
{
$self->{ 'no_break' } = 0;
if ($token =~ /^(SELECT|UPDATE|DELETE|INSERT)$/i && $self->{ '_is_in_policy' })
{
$self->_over($token,$last);
}
# case of ON DELETE/UPDATE clause in create table statements
if ($token =~ /^(UPDATE|DELETE)$/i && $self->{ '_is_in_create' }) {
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
if ($token =~ /^UPDATE$/i and $last =~ /^(FOR|KEY|DO)$/i)
{
$self->_add_token( $token );
}
elsif (!$self->{ '_is_in_policy' } && $token !~ /^(DELETE|UPDATE)$/i && (!defined $self->_next_token || $self->_next_token !~ /^DISTINCT$/i))
{
$self->_new_line($token,$last) if (!defined $last or $last ne "\\\\");
$self->_add_token( $token );
$self->_new_line($token,$last) if (!$self->{ 'wrap_after' } and (!defined $last or $last ne "\\\\"));
$self->_over($token,$last);
}
else
{
if ($self->{ '_is_in_policy' } > 1) {
$self->_new_line($token,$last);
}
$self->_add_token( $token );
if ($self->{ '_is_in_policy' } > 1) {
$self->_new_line($token,$last);
$self->_over($token,$last);
}
$self->{ '_is_in_policy' }++ if ($self->{ '_is_in_policy' });
}
}
elsif ( $self->{ '_current_sql_stmt' } !~ /^(GRANT|REVOKE)$/
and uc($token) eq 'INSERT'
and $self->{ '_is_in_policy' } && $self->{ 'format_type' })
{
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->_over($token,$last);
}
elsif ( $token =~ /^(?:WITHIN)$/i )
{
$self->{ '_is_in_within' } = 1;
$self->{ '_has_order_by' } = 1;
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
elsif ( $token =~ /^(?:GROUP|ORDER|LIMIT|EXCEPTION)$/i or (uc($token) eq 'ON' and uc($self->_next_token()) eq 'CONFLICT'))
{
if ($self->{ 'format_type' } and uc($token) eq 'GROUP' and uc($self->_next_token()) eq 'BY') {
$self->{ 'no_break' } = 1;
}
if (uc($token) eq 'ORDER' and uc($self->_next_token()) eq 'BY') {
$self->{ '_is_in_order_by' } = 1;
} else {
$self->{ '_is_in_order_by' } = 0;
}
$self->{ '_is_in_value' } = 0;
$self->{ '_parenthesis_level_value' } = 0;
if (uc($token) eq 'GROUP' and !defined $last or uc($last) eq 'EXCLUDE') {
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
if (($self->{ '_is_in_within' } && uc($token) eq 'GROUP') || ($self->{ '_is_in_over' } && uc($token) eq 'ORDER')) {
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
next;
}
if ($self->{ '_has_over_in_join' } and uc($token) eq 'GROUP')
{
$self->_back($token, $last);
$self->{ '_has_over_in_join' } = 0;
}
$self->{ '_is_in_join' } = 0;
$self->{ '_has_limit' } = 1 if (uc($token) eq 'LIMIT');
if ($token !~ /^EXCEPTION/i) {
$self->_back($token, $last);
} else {
$self->_set_level($self->_pop_level($token, $last), $token, $last);
}
if (uc($token) ne 'EXCEPTION' or not defined $last or uc($last) ne 'RAISE')
{
# Excluding CREATE/DROP GROUP
if (uc($token) ne 'LIMIT' or !$self->{ '_is_in_create' })
{
$self->_new_line($token,$last) if (!$self->{ '_is_in_function' } and (not defined $last or $last !~ /^(CREATE|DROP)$/));
}
}
# Store current indent position to print END at the right level
if (uc($last) ne 'RAISE' and $token =~ /^EXCEPTION$/i)
{
$self->{ '_is_in_exception' } = 1;
if ($#{ $self->{ '_begin_level' } } >= 0) {
$self->_set_level($self->{ '_begin_level' }[-1], $token, $last);
}
} elsif (uc($last) eq 'RAISE' and $token =~ /^EXCEPTION$/i) {
$self->_push_level($self->{ '_level' }, $token, $last);
$self->_over($token,$last);
}
$self->{ '_is_in_where' }-- if ($self->{ '_is_in_where' });
$self->_add_token( $token );
if ($token =~ /^EXCEPTION$/i && $self->{ '_level' } == 0) {
$self->_over($token,$last);
}
}
elsif ( $token =~ /^(?:BY)$/i and $last !~ /^(?:INCREMENT|OWNED|PARTITION|GENERATED)$/i)
{
$self->_add_token( $token );
$self->{ '_col_count' } = 0 if (defined $last && $last =~ /^(?:GROUP|ORDER)/i);
if (!$self->{ '_has_order_by' } and !$self->{ '_is_in_over' }) {
$self->_new_line($token,$last) if (!$self->{ 'wrap_after' } and !$self->{ '_is_in_function' });
$self->_over($token,$last);
}
}
elsif ( $token =~ /^(?:CASE)$/i and uc($last) ne 'END')
{
if ($self->{ '_is_in_policy' })
{
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->_add_token( $token );
$self->{ '_is_in_policy' }++;
} else {
$self->_add_token( $token );
}
# Store current indent position to print END at the right level
$self->_push_level($self->{ '_level' }, $token, $last);
# Mark next WHEN statement as first element of a case
# to force indentation only after this element
$self->{ '_first_when_in_case' } = 1;
$self->{ '_is_in_case' }++;
}
elsif ( $token =~ /^(?:WHEN)$/i)
{
if (!$self->{ '_first_when_in_case' } and !$self->{'_is_in_trigger'}
and defined $last and uc($last) ne 'CASE'
)
{
if (!$self->{ '_is_in_exception' }) {
$self->_set_level($self->{ '_level_stack' }[-1], $token, $last) if ($#{ $self->{ '_level_stack' } } >= 0);
} elsif ($#{ $self->{ '_begin_level' }} >= 0) {
$self->_set_level($self->{ '_begin_level' }[-1]+1, $token, $last);
}
}
$self->_new_line($token,$last) if (not defined $last or $last !~ /^(CASE|,|\()$/i );
$self->_add_token( $token );
if (!$self->{ '_is_in_case' } && !$self->{ '_is_in_trigger' }) {
$self->_over($token,$last);
}
$self->{ '_first_when_in_case' } = 0;
}
elsif ( $token =~ /^(?:IF|LOOP)$/i && $self->{ '_current_sql_stmt' } ne 'GRANT')
{
if ($self->{ '_is_in_join' }) {
$self->{ '_is_in_join' } = 0;
$self->_back($token,$last);
$self->_add_token( $token );
} else {
$self->_add_token( $token );
}
$self->{ 'no_break' } = 0;
if (defined $self->_next_token and $self->_next_token !~ /^(EXISTS|;)$/i)
{
if (uc($self->_next_token) ne 'NOT' || uc($self->{ '_tokens' }->[ 1 ]) ne 'EXISTS')
{
$self->_new_line($token,$last) if ($token =~ /^LOOP$/i);
$self->_over($token,$last);
$self->_push_level($self->{ '_level' }, $token, $last);
if ($token =~ /^IF$/i) {
$self->{ '_is_in_if' } = 1;
}
}
}
}
elsif ($token =~ /^THEN$/i)
{
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->_set_level($self->{ '_level_stack' }[-1], $token, $last) if ($self->{ '_is_in_if' } and $#{ $self->{ '_level_stack' } } >= 0);
if ($self->{ '_is_in_case' } && defined $self->_next_token() and $self->_next_token() !~ /^(\(|RAISE)$/i) {
$self->_set_level($self->{ '_level_stack' }[-1], $token, $last) if ($#{ $self->{ '_level_stack' } } >= 0);
$self->_over($token,$last);
}
if ($self->{ '_is_in_case' } && defined $self->_next_token() and
$self->_next_token() eq '(' and $self->{ '_tokens' }[1] !~ /^(SELECT|CASE)$/i
)
{
$self->_set_level($self->{ '_level_stack' }[-1], $token, $last) if ($#{ $self->{ '_level_stack' } } >= 0);
$self->_over($token,$last);
}
$self->{ '_is_in_if' } = 0;
}
elsif ( $token =~ /^(?:ELSE|ELSIF)$/i )
{
$self->_back($token, $last);
$self->_new_line($token,$last);
$self->_add_token( $token );
$self->_new_line($token,$last) if ($token !~ /^ELSIF$/i);
$self->_over($token,$last);
}
elsif ( $token =~ /^(?:END)$/i )
{
$self->{ '_first_when_in_case' } = 0;
if ($self->{ '_is_in_case' })
{
$self->{ '_is_in_case' }--;
$self->_back($token, $last);
$self->_set_level($self->_pop_level($token, $last), $token, $last);
if (!$self->{ '_is_in_create_function' } or $self->_next_token eq ';')
{
$self->_new_line($token,$last);
$self->_add_token( $token );
next;
}
}
# When we are not in a function code block (0 is the main begin/end block of a function)
elsif ($self->{ '_is_in_block' } == -1 && $last ne ',')
{
# END is closing a create function statement so reset position to beginning
if ($self->_next_token !~ /^(IF|LOOP|CASE|INTO|FROM|END|ELSE|AND|OR|WHEN|AS|,)$/i) {
$self->_reset_level($token, $last);
} else
{
# otherwise back to last level stored at CASE keyword
$self->_set_level($self->_pop_level($token, $last), $token, $last);
}
}
# We reach the last end of the code
elsif ($self->{ '_is_in_block' } > -1 and $self->_next_token =~/^(;|\$.*\$)$/ and !$self->{ '_is_in_exception' })
{
if ($self->{ '_is_in_block' } == 0)
{
$self->_reset_level($token, $last);
} else {
$self->_set_level($self->_pop_level($token, $last) - 1, $token, $last);
$self->{ '_is_in_block' }--;
}
}
# We are in a code block
elsif ($last ne ',')
{
# decrease the block level if this is an END closing a BEGIN block
if ($self->_next_token !~ /^(IF|LOOP|CASE|INTO|FROM|END|ELSE|AND|OR|WHEN|AS|,)$/i)
{
$self->{ '_is_in_block' }--;
}
# Go back to level stored with IF/LOOP/BEGIN/EXCEPTION block
if ($self->{ '_is_in_block' } > -1)
{
$self->_set_level($self->_pop_level($token, $last), $token, $last);
} else {
$self->_reset_level($token, $last);
}
}
if ($self->_next_token eq ';') {
$self->_set_level(pop( @{ $self->{ '_begin_level' } } ), $token, $last);
} elsif (!$self->{ '_is_in_exception' } and $self->_next_token !~ /^(AS|CASE|FROM|,)$/i) {
$self->_back($token, $last) if ($self->_next_token =~ /^(IF|LOOP|CASE|INTO|FROM|END|ELSE|AND|OR|WHEN|AS|,)$/i);
}
$self->_new_line($token,$last);
$self->_add_token( $token );
}
elsif ( $token =~ /^(?:END::[^\s]+)$/i and $self->{ '_is_in_case' } )
{
$self->{ '_first_when_in_case' } = 0;
if ($self->{ '_is_in_case' })
{
$self->{ '_is_in_case' }--;
$self->_back($token, $last);
$self->_set_level($self->_pop_level($token, $last), $token, $last);
}
$self->_new_line($token,$last);
$self->_add_token( $token );
}
elsif ( $token =~ /^(?:UNION|INTERSECT|EXCEPT)$/i )
{
$self->{ 'no_break' } = 0;
if ($self->{ '_is_in_join' })
{
$self->_back($token, $last);
$self->{ '_is_in_join' } = 0;
}
$self->_back($token, $last) unless defined $last and $last eq '(';
$self->_new_line($token,$last);
$self->_add_token( $token );
$self->_new_line($token,$last) if ( defined $self->_next_token
and $self->_next_token ne '('
and $self->_next_token !~ /^ALL$/i
);
$self->{ '_is_in_where' }-- if ($self->{ '_is_in_where' });
$self->{ '_is_in_from' } = 0;
}
elsif ( $token =~ /^(?:LEFT|RIGHT|FULL|INNER|OUTER|CROSS|NATURAL)$/i and (not defined $last or uc($last) ne 'MATCH') )
{
$self->{ 'no_break' } = 0;
if (!$self->{ '_is_in_join' } and ($last and $last ne ')') )
{
$self->_back($token, $last);
}
if ($self->{ '_has_over_in_join' })
{
$self->{ '_has_over_in_join' } = 0;
$self->_back($token, $last);
}
if ( $token =~ /(?:LEFT|RIGHT|FULL|CROSS|NATURAL)$/i )
{
$self->_new_line($token,$last);
$self->_over($token,$last) if ( $self->{ '_level' } == 0 || ($self->{ '_is_in_with' } > 1 and $self->{ '_level' } == 1));
}
if ( ($token =~ /(?:INNER|OUTER)$/i) && ($last !~ /(?:LEFT|RIGHT|CROSS|NATURAL|FULL)$/i) )
{
$self->_new_line($token,$last);
$self->_over($token,$last) if (!$self->{ '_is_in_join' });
}
$self->_add_token( $token );
}
elsif ( $token =~ /^(?:JOIN)$/i and !$self->{ '_is_in_operator' })
{
$self->{ 'no_break' } = 0;
if ( not defined $last or $last !~ /^(?:LEFT|RIGHT|FULL|INNER|OUTER|CROSS|NATURAL)$/i )
{
$self->_new_line($token,$last);
$self->_back($token, $last) if ($self->{ '_has_over_in_join' });
$self->{ '_has_over_in_join' } = 0;
}
$self->_add_token( $token );
$self->{ '_is_in_join' } = 1;
}
elsif ( $token =~ /^(?:AND|OR)$/i )
{
$self->{ '_is_in_where' } = 0;
# Try to detect AND in BETWEEN clause to prevent newline insert
if (uc($token) eq 'AND' and ($self->{ '_is_in_between' }
|| (defined $last && $last =~ /^(PRECEDING|FOLLOWING|ROW)$/i)))
{
$self->_add_token( $token );
$last = $self->_set_last($token, $last);
$self->{ '_is_in_between' } = 0;
next;
}
$self->{ 'no_break' } = 0;
if ($self->{ '_is_in_join' })
{
$self->_over($token,$last);
$self->{ '_has_over_in_join' } = 1;
}
$self->{ '_is_in_join' } = 0;
if ( !$self->{ '_is_in_if' } and !$self->{ '_is_in_index' }
and (!$last or $last !~ /^(?:CREATE)$/i)
and ($self->{ '_is_in_create' } <= 2)
and !$self->{ '_is_in_trigger' }
)
{
$self->_new_line($token,$last);
if (!$self->{'_and_level'} and (!$self->{ '_level' } || $self->{ '_is_in_alter' })) {
$self->_over($token,$last);
} elsif ($self->{'_and_level'} and !$self->{ '_level' } and uc($token) eq 'OR') {
$self->_over($token,$last);
} elsif ($#{$self->{ '_level_stack' }} >= 0 and $self->{ '_level' } == $self->{ '_level_stack' }[-1]) {
$self->_over($token,$last);
}
}
$self->_add_token( $token );
$self->{'_and_level'}++;
}
elsif ( $token =~ /^\/\*.*\*\/$/s )
{
if ( !$self->{ 'no_comments' } )
{
$token =~ s/\n[\s\t]+\*/\n\*/gs;
if (!$self->{ '_is_in_over' } and !$self->{ '_is_in_function' })
{
$self->_new_line($token,$last), $self->_add_token('') if (defined $last and $last eq ';');
$self->_new_line($token,$last);
}
$self->_add_token( $token );
$self->{ 'break' } = "\n" unless ( $self->{ 'spaces' } != 0 );
if (!$self->{ '_is_in_function' } and !$self->{ '_is_in_over' }
and (!$self->_is_comment($token) or !defined $self->_next_token
or $self->_next_token ne ')')
)
{
$self->_new_line($token,$last);
}
$self->{ 'break' } = " " unless ( $self->{ 'spaces' } != 0 );
}
}
elsif (($token =~ /^USING$/i and !$self->{ '_is_in_order_by' } and !$self->{ '_is_in_exception' }
and ($self->{ '_current_sql_stmt' } ne 'DELETE' or uc($self->_next_token) !~ /^(\(|LATERAL)$/i))
or (uc($token) eq 'WITH' and uc($self->_next_token()) eq 'CHECK' and $self->{ '_is_in_policy' })
)
{
if (!$self->{ '_is_in_from' })
{
$self->_over($token,$last) if ($self->{ '_is_in_operator' } || ($self->{ '_is_in_policy' } && !$self->{ 'format_type' } && !$self->{ '_is_in_using' }));
$self->_push_level($self->{ '_level' }, $token, $last) if ($token =~ /^USING$/i);
$self->_set_level($self->_pop_level($token, $last), $token, $last) if (uc($token) eq 'WITH' and $self->{ '_is_in_policy' } > 1 && !$self->{ 'format_type' } && $self->{ '_is_in_using' });
$self->_new_line($token,$last) if (uc($last) ne 'EXCLUDE' and !$self->{ '_is_in_index' } and !$self->{ '_is_in_function' });
}
else
{
# USING in a join clause disables line breaks, like in a function
$self->{ '_is_in_function' }++;
# Restore FROM position
$self->_set_level($self->_pop_level($token, $last), $token, $last) if (!$self->{ '_is_in_join' });
}
$self->_add_token($token);
$self->{ '_is_in_using' } = 1;
$self->{ '_is_in_policy' }++ if (!$self->{ '_is_in_from' } && !$self->{ '_is_in_join' }
&& uc($last) ne 'EXCLUDE' && !$self->{ '_is_in_function' }
&& !$self->{ '_is_in_operator' } && !$self->{ '_is_in_create' }
&& !$self->{ '_is_in_index' });
}
elsif ($token =~ /^EXCLUDE$/i)
{
if ($last !~ /^(FOLLOWING|ADD)$/i or $self->_next_token !~ /^USING$/i) {
$self->_new_line($token,$last) if ($last !~ /^(FOLLOWING|ADD)$/i);
}
$self->_add_token( $token );
$self->{ '_is_in_using' } = 1;
}
elsif ($token =~ /^\\\S/)
{
# treat everything starting with a \ and at least one character as a psql meta command.
$self->_add_token( $token );
$self->_new_line($token,$last) if ($token ne "\\\\" and defined $self->_next_token and $self->_next_token ne "\\\\");
}
elsif ($token =~ /^(ADD|DROP)$/i && ($self->{ '_current_sql_stmt' } eq 'SEQUENCE'
|| $self->{ '_current_sql_stmt' } eq 'ALTER'))
{
if ($self->_next_token !~ /^(NOT|NULL|DEFAULT)$/i and (not defined $last or !$self->{ '_is_in_alter' } or $last ne '(')) {
$self->_new_line($token,$last);
if ($self->{ '_is_in_alter' } < 2) {
$self->_over($token,$last);
}
}
$self->_add_token($token, $last);
$self->{ '_is_in_alter' }++ if ($self->{ '_is_in_alter' } == 1);
}
elsif ($token =~ /^INCREMENT$/i && $self->{ '_current_sql_stmt' } eq 'SEQUENCE')
{
$self->_new_line($token,$last);
$self->_add_token($token);
}
elsif ($token =~ /^NO$/i and $self->_next_token =~ /^(MINVALUE|MAXVALUE)$/i)
{
$self->_new_line($token,$last);
$self->_add_token($token);
}
elsif ($last !~ /^(\(|NO)$/i and $token =~ /^(MINVALUE|MAXVALUE)$/i)
{
$self->_new_line($token,$last);
$self->_add_token($token);
}
elsif ($token =~ /^CACHE$/i)
{
$self->_new_line($token,$last);
$self->_add_token($token);
}
else
{
next if ($self->{'keep_newline'} and $token =~ /^\s+$/);
if ($self->{ '_fct_code_delimiter' } and $self->{ '_fct_code_delimiter' } =~ /^'.*'$/) {
$self->{ '_fct_code_delimiter' } = "";
$self->{ '_language_sql' } = 0;
}
if ($self->{ '_is_in_block' } != -1 and !$self->{ '_fct_code_delimiter' })
{
$self->{ '_is_in_block' } = -1;
$self->{ '_is_in_procedure' } = 0;
$self->{ '_is_in_function' } = 0;
}
# special case with comment
if ($token =~ /(?:\s*--)[\ \t\S]*/s)
{
if ( !$self->{ 'no_comments' } )
{
$token =~ s/^(\s*)(--.*)/$2/s;
my $start = $1 || '';
if ($start =~ /\n/s) {
$self->_new_line($token,$last), $self->_add_token('') if (defined $last and $last eq ';' and $self->{ 'content' } !~ /\n$/s);
$self->_new_line($token,$last);
}
$token =~ s/\s+$//s;
$token =~ s/^\s+//s;
$self->_add_token( $token );
$self->_new_line($token,$last) if ($start || ($self->{ 'content' } !~ /\n$/s && defined $self->_next_token && uc($self->_next_token) ne 'AS'));
# Add extra newline after the last comment if we are not in a block or a statement
if (defined $self->_next_token and $self->_next_token !~ /^\s*--/)
{
$self->{ 'content' } .= "\n" if ($self->{ '_is_in_block' } == -1
and !$self->{ '_is_in_declare' }
and !$self->{ '_fct_code_delimiter' }
and !$self->{ '_current_sql_stmt' }
and defined $last and $self->_is_comment($last)
and $self->{ 'content' } !~ /\n$/s
);
}
$last = $self->_set_last($token, $last);
}
next;
}
if ($last =~ /^(?:SEQUENCE)$/i and $self->_next_token !~ /^(OWNED|;)$/i)
{
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->_over($token,$last);
}
else
{
if (defined $last && $last eq ')' && (!defined $self->_next_token || $self->_next_token ne ';'))
{
if (!$self->{ '_parenthesis_level' } && $self->{ '_is_in_from' })
{
$self->_set_level(pop(@{ $self->{ '_level_parenthesis' } }) || 1, $token, $last);
}
}
if (defined $last and uc($last) eq 'UPDATE' and $self->{ '_current_sql_stmt' } eq 'UPDATE')
{
$self->_new_line($token,$last);
$self->_over($token,$last);
}
if (defined $last and uc($last) eq 'AS' and uc($token) eq 'WITH') {
$self->_new_line($token,$last);
}
if (uc($token) eq 'INSERT' and defined $last and $last eq ';')
{
if ($#{ $self->{ '_level_stack' } } >= 0) {
$self->_set_level($self->{ '_level_stack' }[-1], $token, $last);
} else {
$self->_back($token,$last);
}
}
if (($self->{ '_is_in_policy' } > 1 || ($self->{ '_is_in_policy' } && $self->{ '_is_in_sub_query' })) && $token =~ /^(ALL|SELECT|UPDATE|DELETE|INSERT)$/i)
{
$self->_new_line($token,$last);
$self->_over($token,$last);
$self->_add_token( $token );
$self->_new_line($token,$last);
$self->_over($token,$last);
$last = $self->_set_last($token, $last);
next;
}
$self->{ '_is_in_policy' }++ if ($token =~ /^SELECT$/i and $self->{ '_is_in_policy' });
if ($self->{ 'comma_break' } and $self->{ '_current_sql_stmt' } eq 'INSERT' && $last eq '(')
{
$self->_new_line($token,$last);
}
# Remove extra newline at end of code of SQL functions
if ($token eq "'" and $last eq ';' and $self->_next_token =~ /^(;|LANGUAGE|STRICT|SET|IMMUTABLE|STABLE|VOLATILE)$/i) {
$self->{ 'content' } =~ s/\s+$/\n/s;
}
# Finally add the token without further condition
$self->_add_token( $token, $last );
if ($last eq "'" and $token =~ /^(BEGIN|DECLARE)$/i)
{
$last = $self->_set_last($token, $last);
$self->_new_line($token,$last);
$self->_over($token,$last);
}
# Reset CREATE statement flag when using CTE
if ($self->{ '_is_in_create' } && $self->{ '_is_in_with' }
&& uc($token) eq 'WITH' && uc($last) eq 'AS')
{
$self->{ '_is_in_create' } = 0;
}
if (defined $last && uc($last) eq 'LANGUAGE' && (!defined $self->_next_token || $self->_next_token ne ';'))
{
$self->_new_line($token,$last);
}
}
}
$last = $self->_set_last($token, $last);
$pos++;
}
if ($self->{ 'no_extra_line' })
{
$self->_new_line() if ($self->{ 'content' } !~ /;$/s);
$self->{ 'content' } =~ s/\s+$/\n/s;
}
else
{
$self->_new_line();
}
# Attempt to eliminate redundant parentheses in DML queries
while ($self->{ 'content' } =~ s/(\s+(?:WHERE|SELECT|FROM)\s+[^;]+)[\(]{2}([^\(\)]+)[\)]{2}([^;]+)/$1($2)$3/igs) {};
return;
}
sub _lower
{
my ( $self, $token ) = @_;
if ($DEBUG) {
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_ADD: line: $line => token=$token\n";
}
return lc($token);
}
=head2 _add_token
Add a token to the beautified string.
Code lifted from SQL::Beautify
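A minimal sketch of the internal usage (assuming $self is an already-initialized
beautifier object, created outside this excerpt):
    $self->_new_line();
    $self->_add_token( 'SELECT' );   # indented according to the current level
    $self->_add_token( 'foo' );      # separated from the previous token by a space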
=cut
sub _add_token
{
my ( $self, $token, $last_token ) = @_;
if ($DEBUG)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_ADD: line: $line => last=", ($last_token||''), ", token=$token\n";
}
if ( $self->{ 'wrap' } )
{
my $wrap;
if ( $self->_is_keyword( $token, $self->_next_token(), $last_token ) ) {
$wrap = $self->{ 'wrap' }->{ 'keywords' };
}
elsif ( $self->_is_constant( $token ) ) {
$wrap = $self->{ 'wrap' }->{ 'constants' };
}
if ( $wrap ) {
$token = $wrap->[ 0 ] . $token . $wrap->[ 1 ];
}
}
if ($self->{keep_newline} and $self->{ '_is_in_block' } >= 0 and $token =~ /^[\r\n]+$/s
and defined $last_token and $last_token eq ';'
)
{
$token =~ s/^[\r\n]+$/\n/s;
$self->{ 'content' } =~ s/\s+$/\n/s;
$self->{ 'content' } .= $token if ($self->{ 'content' } !~ /[\n]{2,}$/s);
return;
}
my $last_is_dot = defined( $last_token ) && $last_token eq '.';
my $sp = $self->_indent;
if ( !$self->_is_punctuation( $token ) and !$last_is_dot)
{
if ( (!defined($last_token) || $last_token ne '(') && $token ne ')' && $token !~ /^::/ )
{
if ($token ne ')'
&& defined($last_token)
&& $last_token !~ '::$'
&& $last_token ne '['
&& ($token ne '(' || !$self->_is_function( $last_token ) || $self->{ '_is_in_type' })
)
{
print STDERR "DEBUG_SPC: 1) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
if ( ($token !~ /PGFESCQ[12]/ or $last_token !~ /'$/)
and ($last_token !~ /PGFESCQ[12]/ or $token !~ /^'/)
)
{
if ($token !~ /^['"].*['"]$/ or $last_token ne ':')
{
if ($token =~ /AAKEYWCONST\d+AA\s+AAKEYWCONST\d+AA/) {
$token =~ s/(AAKEYWCONST\d+AA)/$sp$1/gs;
} else {
$self->{ 'content' } .= $sp if (!$self->{ 'no_space_function' } or $token ne '('
or (!$self->{ '_is_in_drop_function' }
and !$self->{ '_is_in_create_function' }
and !$self->{ '_is_in_trigger' }));
if ($self->{ 'no_space_function' } and $token eq '(' and !$self->_is_keyword( $last_token, $token, undef ))
{
$self->{ 'content' } =~ s/$sp$//s;
}
}
}
}
}
elsif (!defined($last_token) && $token)
{
print STDERR "DEBUG_SPC: 2) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
elsif ($token eq '(' and $self->{ '_is_in_create' } == 2 and $self->{ 'content' } !~ /$sp$/)
{
print STDERR "DEBUG_SPC: 2b) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
elsif (defined $last_token && $self->_is_comment($last_token))
{
print STDERR "DEBUG_SPC: 2c) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
}
elsif ( defined $last_token && $last_token eq '(' && $token ne ')'
&& $token !~ /^::/ && !$self->{'wrap_after'} && $self->{ '_is_in_with' } == 1)
{
print STDERR "DEBUG_SPC: 3) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
elsif ( $self->{ '_is_in_create' } == 2 && defined($last_token))
{
if ($last_token ne '::' and !$self->{ '_is_in_partition' }
and !$self->{ '_is_in_policy' }
and !$self->{ '_is_in_trigger' }
and !$self->{ '_is_in_aggregate' }
and ($last_token ne '(' || !$self->{ '_is_in_index' }))
{
print STDERR "DEBUG_SPC: 4) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp if ($last_token ne '(' or !$self->{ '_is_in_function' });
}
}
elsif (defined $last_token and (!$self->{ '_is_in_operator' } or !$self->{ '_is_in_alter' }))
{
if ($last_token eq '(' and ($self->{ '_is_in_type' } or ($self->{ '_is_in_operator' }
and !$self->_is_type($token, $last_token, $self->_next_token))))
{
print STDERR "DEBUG_SPC: 5a) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
elsif ($self->{ 'comma_break' } and $self->{ '_current_sql_stmt' } eq 'INSERT')
{
print STDERR "DEBUG_SPC: 5b) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
}
elsif ($token eq ')' and $self->{ '_is_in_block' } >= 0 && $self->{ '_is_in_create' })
{
print STDERR "DEBUG_SPC: 6) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
$self->{ 'content' } .= $sp;
}
else
{
print STDERR "DEBUG_SPC: 7) last=", ($last_token||''), ", token=$token\n" if ($DEBUG_SP);
}
if ($self->_is_comment($token))
{
my @lines = split(/\n/, $token);
for (my $i = 1; $i <= $#lines; $i++) {
if ($lines[$i] =~ /^\s*\*/) {
$lines[$i] =~ s/^\s*\*/$sp */;
} elsif ($lines[$i] =~ /^\s+[^\*]/) {
$lines[$i] =~ s/^\s+/$sp /;
}
}
$token = join("\n", @lines);
}
else
{
$token =~ s/\n/\n$sp/gs if ($self->{ '_is_in_function' } and $self->{ '_fct_code_delimiter' } eq "'");
}
}
my $next_token = $self->_next_token || '';
my @cast = ();
my @next_cast = ();
# Be sure that we are not going to modify a constant
if ($self->{ '_is_in_create' } < 2 and $token !~ /^[E]*'.*'$/)
{
@cast = split(/::/, $token, -1);
$token = shift(@cast) if ($#cast >= 0);
@next_cast = split(/::/, $next_token);
$next_token = shift(@next_cast) if ($#next_cast >= 0);
}
# lowercase/uppercase keywords, taking care of functions with the same name
if ($self->_is_keyword( $token, $next_token, $last_token ) and
(!$self->_is_type($next_token) or $self->{ '_is_in_create' } < 2 or $self->{ '_is_in_cast' }
or ($self->{ '_is_in_create' } == 2 and $token =~ /^(WITH|WITHOUT)$/i)
or $self->{ '_is_in_create_function' } or uc($token) eq 'AS')
and ($next_token ne '(' or (defined $last_token and $last_token =~ /^(CREATE|ALTER)$/i)
or !$self->_is_function( $token ))
)
{
# Be sure that we are not formatting a WITH TIME ZONE clause
if (uc($token) ne 'WITH' or not defined $next_token
or $next_token !~ /^(time|timestamp)$/i)
{
$token = lc( $token ) if ( $self->{ 'uc_keywords' } == 1 );
$token = uc( $token ) if ( $self->{ 'uc_keywords' } == 2 );
$token = ucfirst( lc( $token ) ) if ( $self->{ 'uc_keywords' } == 3 );
}
}
else
{
# lowercase/uppercase known functions or words followed by an open parenthesis
# if the token is not a keyword, an open parenthesis or a comment
my $fct = $self->_is_function( $token, $last_token, $next_token ) || '';
if (($fct and $next_token eq '(' and defined $last_token and uc($last_token) ne 'CREATE')
or (!$self->_is_keyword( $token, $next_token, $last_token ) and !$next_token eq '('
and $token ne '(' and !$self->_is_comment( $token )) )
{
$token =~ s/$fct/\L$fct\E/i if ( $self->{ 'uc_functions' } == 1 );
$token =~ s/$fct/\U$fct\E/i if ( $self->{ 'uc_functions' } == 2 );
$fct = ucfirst( lc( $fct ) );
$token =~ s/$fct/$fct/i if ( $self->{ 'uc_functions' } == 3 );
}
# case of (NEW|OLD).colname keywords that need to be formatted too
if (($self->{ '_is_in_create_function' } or $self->{ '_fct_code_delimiter' } or $self->{ '_is_in_rule' })
and $token =~ /^(NEW|OLD)\./i)
{
$token =~ s/^(OLD|NEW)\./\L$1\E\./i if ( $self->{ 'uc_keywords' } == 1 );
$token =~ s/^(OLD|NEW)\./\U$1\E\./i if ( $self->{ 'uc_keywords' } == 2 );
$token =~ s/^OLD\./\UOld\E\./i if ( $self->{ 'uc_keywords' } == 3 );
$token =~ s/^NEW\./\UNew\E\./i if ( $self->{ 'uc_keywords' } == 3 );
}
}
my $tk_is_type = $self->_is_type($token, $last_token, $next_token);
if ($token =~ /^(AT|SET)$/i)
{
$self->{ '_not_a_type' } = 1;
}
elsif (!$tk_is_type)
{
$self->{ '_not_a_type' } = 0;
}
# Type names follow the uc_types setting
if (!$self->{ '_not_a_type' } and ($self->{ '_is_in_create' } or $self->{ '_is_in_declare' }
or $self->{ '_is_in_cast' } or $self->{ '_is_in_type' } or $self->{ '_is_in_alter' } ))
{
if ($tk_is_type and defined $last_token
or ($token =~ /^(WITH|WITHOUT)$/i and $next_token =~ /^(time|timestamp)$/i)
)
{
if ($last_token =~ /^(AS|RETURNS|INOUT|IN|OUT)$/i or !$self->_is_keyword($last_token)
or $self->_is_type($last_token) or $self->_is_type($next_token))
{
$token = lc( $token ) if ( $self->{ 'uc_types' } == 1 );
$token = uc( $token ) if ( $self->{ 'uc_types' } == 2 );
$token = ucfirst( lc( $token ) ) if ( $self->{ 'uc_types' } == 3 );
}
}
}
# Add formatting for HTML output
if ( $self->{ 'colorize' } && $self->{ 'format' } eq 'html' ) {
$token = $self->highlight_code($token, $last_token, $next_token);
}
foreach my $c (@cast)
{
my @words = split(/(\s+)/, $c);
$c = '';
foreach my $w (@words)
{
if (!$self->_is_type($token))
{
$c .= $w;
}
else
{
$c .= lc($w) if ( $self->{ 'uc_types' } == 1 );
$c .= uc($w) if ( $self->{ 'uc_types' } == 2 );
$c .= ucfirst( lc( $w ) ) if ( $self->{ 'uc_types' } == 3 );
}
}
$token .= '::' . $c;
}
# Format cast in function code
my $reg = join('|', @{$self->{ 'types' }});
$reg = '(?:TIMESTAMP(\s*\(\s*\d+\s*\))? WITH TIME ZONE|TIMESTAMP(\s*\(\s*\d+\s*\))? WITHOUT TIME ZONE|CHARACTER VARYING|' . $reg . ')';
if ($token =~ /::/)
{
$token =~ s/::($reg)/'::' . lc($1)/igse if ( $self->{ 'uc_types' } == 1 );
$token =~ s/::($reg)/'::' . uc($1)/igse if ( $self->{ 'uc_types' } == 2 );
$token =~ s/::($reg)/'::' . ucfirst(lc($1))/igse if ( $self->{ 'uc_types' } == 3 );
}
# special case for MySQL
if ($token =~ /^(;|\$\$|\/\/)$/ and $self->{ 'content' } =~ /DELIMITER\s*$/)
{
$self->{ 'content' } .= ' ' if ($self->{ 'content' } !~ /DELIMITER\s$/);
}
$self->{ 'content' } .= $token;
# This can't be the beginning of a new line anymore.
$self->{ '_new_line' } = 0;
}
=head2 _over
Increase the indentation level.
Code lifted from SQL::Beautify
=cut
sub _over
{
my ( $self, $token, $last ) = @_;
if ($DEBUG) {
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_OVER: line: $line => last=$last, token=$token\n";
}
++$self->{ '_level' };
}
=head2 _back
Decrease the indentation level.
Code lifted from SQL::Beautify
=cut
sub _back
{
my ( $self, $token, $last ) = @_;
if ($DEBUG) {
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_BACK: line: $line => last=$last, token=$token\n";
}
--$self->{ '_level' } if ( $self->{ '_level' } > 0 );
}
=head2 _indent
Return a string of spaces according to the current indentation level and the
spaces setting for indenting.
Code lifted from SQL::Beautify
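For illustration, with the default of 4 spaces per level and a current level of 2
(values chosen arbitrarily for this sketch):
    $self->{ 'spaces' }    = 4;
    $self->{ '_level' }    = 2;
    $self->{ '_new_line' } = 1;
    my $pad = $self->_indent();   # 8 spaces; a single space when not at a new line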
=cut
sub _indent
{
my ( $self ) = @_;
if ( $self->{ '_new_line' } )
{
return $self->{ 'space' } x ( $self->{ 'spaces' } * ( $self->{ '_level' } // 0 ) );
}
# When this is not for indentation, force using a single space
else
{
return ' ';
}
}
=head2 _new_line
Add a line break, but make sure there are no empty lines.
Code lifted from SQL::Beautify
=cut
sub _new_line
{
my ( $self, $token, $last ) = @_;
if ($DEBUG and defined $token) {
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_NL: line: $line => last=", ($last||''), ", token=$token\n";
}
$self->{ 'content' } .= $self->{ 'break' } unless ( $self->{ '_new_line' } );
$self->{ '_new_line' } = 1;
}
=head2 _next_token
Have a look at the token that's coming up next.
Code lifted from SQL::Beautify
=cut
sub _next_token
{
my ( $self ) = @_;
return @{ $self->{ '_tokens' } } ? $self->{ '_tokens' }->[ 0 ] : undef;
}
=head2 _token
Get the next token, removing it from the list of remaining tokens.
Code lifted from SQL::Beautify
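A short sketch contrasting it with _next_token (illustration only):
    my $peek = $self->_next_token();   # look at the upcoming token, list unchanged
    my $tok  = $self->_token();        # same token, but removed from the list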
=cut
sub _token
{
my ( $self ) = @_;
return shift @{ $self->{ '_tokens' } };
}
=head2 _is_keyword
Check if a token is a known SQL keyword.
Code lifted from SQL::Beautify
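Illustrative calls; the optional context arguments are used to filter out false
positives:
    $self->_is_keyword( 'SELECT' );              # known keyword
    $self->_is_keyword( 'LEVEL', 'SECURITY' );   # LEVEL only counts before SECURITY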
=cut
sub _is_keyword
{
my ( $self, $token, $next_token, $last_token ) = @_;
return 0 if (!$token);
# Remove cast if any
$token =~ s/::[^:]+$//;
# Fix some false positives
if (defined $next_token)
{
return 0 if (uc($token) eq 'LEVEL' and uc($next_token) ne 'SECURITY');
return 0 if (uc($token) eq 'EVENT' and uc($next_token) ne 'TRIGGER');
}
return 0 if ($token =~ /^(LOGIN|RULE)$/i and !$self->{ '_is_in_create' } and !$self->{ '_is_in_alter' } and !$self->{ '_is_in_drop' } and !$self->{ '_is_in_rule' });
return 0 if (uc($token) eq 'COMMENT' and (not defined $next_token or $next_token) !~ /^ON|IS$/i);
if (defined $last_token)
{
return 0 if (uc($token) eq 'KEY' and $last_token !~ /^(PRIMARY|FOREIGN|PARTITION|NO)$/i);
return 0 if ($token =~ /^(BTREE|HASH|GIST|SPGIST|GIN|BRIN)$/i and $last_token !~ /^(USING|BY)$/i);
return 0 if (uc($token) eq 'NOTICE' and uc($last_token) ne 'RAISE');
return 0 if ( ($self->{ '_is_in_type' } or $self->{ '_is_in_create' }) and $last_token =~ /^(OF|FROM)$/i);
return 0 if (uc($last_token) eq 'AS' and $token !~ /^(IDENTITY|SELECT|ENUM|TRANSACTION|UPDATE|DELETE|INSERT|MATERIALIZED|ON|VALUES|RESTRICTIVE|PERMISSIVE|UGLY|EXECUTE|STORAGE|OPERATOR|RANGE|NOT)$/i);
return 0 if ($token =~ /^(TYPE|SCHEMA)$/i and $last_token =~ /^(COLUMN|\(|,|\||\))/i);
return 0 if ($token =~ /^TYPE$/i and $last_token !~ /^(CREATE|DROP|ALTER|FOR)$/i
and !$self->{ '_is_in_alter' }
and !grep({ uc($_) eq uc( $next_token ) } @{ $self->{ 'types' } })
);
}
if ($DEBUG and defined $token and grep { $_ eq uc( $token ) } @{ $self->{ 'keywords' } }) {
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_KEYWORD: line: $line => last=", ($last_token||''), ", token=$token, next=", ($next_token||''), "\n";
}
return ~~ grep { $_ eq uc( $token ) } @{ $self->{ 'keywords' } };
}
=head2 _is_type
Check if a token is a known SQL type
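Illustrative calls; any parameter list attached to the type is stripped before
the lookup:
    $self->_is_type( 'INTEGER' );         # known type
    $self->_is_type( 'numeric(10,2)' );   # the (10,2) part is removed, still a type
    $self->_is_type( 'mytable' );         # not a type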
=cut
sub _is_type
{
my ( $self, $token, $last_token, $next_token ) = @_;
return if (!defined $token);
return if (defined $next_token and $next_token =~ /^(SEARCH)$/i);
if ($DEBUG and defined $token)
{
my ($package, $filename, $line) = caller;
print STDERR "DEBUG_TYPE: line: $line => token=[$token], last=", ($last_token||''), ", next=", ($next_token||''), ", type=", (grep { uc($_) eq uc( $token ) } @{ $self->{ 'types' } }), "\n";
}
return 0 if ($token =~ /^(int4|int8|num|tstz|ts|date)range$/i
and (not defined $next_token or $next_token eq '('));
my @composite_types = (
'VARYING', 'PRECISION',
'WITH', 'WITHOUT', 'ZONE'
);
# Typical case of a data type used as an object name
if (defined $next_token)
{
if (grep { $_ eq uc( $token ) } @{ $self->{ 'types' } }
and grep { $_ eq uc( $next_token ) } @{ $self->{ 'types' } }
and !grep { $_ eq uc( $next_token ) } @composite_types)
{
return 0;
}
}
$token =~ s/\s*\(.*//; # remove any parameter to the type
return ~~ grep { $_ eq uc( $token ) } @{ $self->{ 'types' } };
}
sub _is_sql_keyword
{
my ( $self, $token ) = @_;
return ~~ grep { $_ eq uc( $token ) } @{ $self->{ 'sql_keywords' } };
}
=head2 _is_comment
Check if a token is an SQL or C-style comment.
=cut
sub _is_comment
{
my ( $self, $token ) = @_;
return 1 if ( $token =~ m#^\s*((?:--)[\ \t\S]*|/\*[\ \t\r\n\S]*?\*/)$#s );
return 0;
}
=head2 _is_function
Check if a token is a known SQL function.
Code lifted from SQL::Beautify and rewritten to check one long regexp instead of a lot of small ones.
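Illustrative calls; when context is given, the token is only reported as a
function if the next token opens a parenthesis:
    $self->_is_function( 'count' );                  # returns 'count'
    $self->_is_function( 'count', 'SELECT', '(' );   # still reported as a function
    $self->_is_function( 'count', 'SELECT', 'x' );   # not followed by '(', undef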
=cut
sub _is_function
{
my ( $self, $token, $last_token, $next_token ) = @_;
return undef if (!$token);
if ( $token =~ $self->{ 'functions_re' } )
{
# Check the context of the function
if (defined $last_token and defined $next_token)
{
return undef if ($next_token ne '(');
return undef if ($self->{ '_is_in_create' } == 1);
}
return $1;
}
else
{
return undef;
}
}
=head2 add_keywords
Add new keywords to highlight.
Code lifted from SQL::Beautify
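Scalars and array references are both accepted. A sketch using made-up keyword
names and a $beautifier object created elsewhere:
    $beautifier->add_keywords( 'MY_KEYWORD', [ 'FOO', 'BAR' ] );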
=cut
sub add_keywords
{
my $self = shift;
for my $keyword ( @_ ) {
push @{ $self->{ 'keywords' } }, ref( $keyword ) ? @{ $keyword } : $keyword;
}
}
=head2 _re_from_list
Create a compiled regexp from a prefix, a suffix and a list of values to match.
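For example (illustration only, made-up values), the call below builds a
case-insensitive alternation wrapped in the given prefix and suffix:
    my $re = _re_from_list( '\b', '\b', 'FOO', [ 'BAR', 'BAZ' ] );
    # equivalent to qr/\b(FOO|BAR|BAZ)\b/i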
=cut
sub _re_from_list
{
my $prefix = shift;
my $suffix = shift;
my (@joined_list, $ret_re);
for my $list_item ( @_ ) {
push @joined_list, ref( $list_item ) ? @{ $list_item } : $list_item;
}
$ret_re = "$prefix(" . join('|', @joined_list) . ")$suffix";
return qr/$ret_re/i;
}
=head2 _refresh_functions_re
Refresh compiled regexp for functions.
=cut
sub _refresh_functions_re
{
my $self = shift;
$self->{ 'functions_re' } = _re_from_list( '\b[\.]*', '$', @{ $self->{ 'functions' } });
}
=head2 add_functions
Add new functions to highlight.
Code lifted from SQL::Beautify
=cut
sub add_functions
{
my $self = shift;
for my $function ( @_ ) {
push @{ $self->{ 'functions' } }, ref( $function ) ? @{ $function } : $function;
}
$self->_refresh_functions_re();
}
=head2 add_rule
Add new rules.
Code lifted from SQL::Beautify
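The format string is a dash-separated list of actions understood by
_process_rule below (break, over, back, token, push, pop, reset). A sketch with
a made-up keyword and a $beautifier object created elsewhere:
    $beautifier->add_rule( 'break-token-over', 'MY_KEYWORD' );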
=cut
sub add_rule
{
my ( $self, $format, $token ) = @_;
my $rules = $self->{ 'rules' } ||= {};
my $group = $rules->{ $format } ||= [];
push @{ $group }, ref( $token ) ? @{ $token } : $token;
}
=head2 _get_rule
Find custom rule for a token.
Code lifted from SQL::Beautify
=cut
sub _get_rule
{
my ( $self, $token ) = @_;
values %{ $self->{ 'rules' } }; # Reset iterator.
while ( my ( $rule, $list ) = each %{ $self->{ 'rules' } } ) {
return $rule if ( grep { uc( $token ) eq uc( $_ ) } @$list );
}
return;
}
=head2 _process_rule
Applies defined rule.
Code lifted from SQL::Beautify
=cut
sub _process_rule
{
my ( $self, $rule, $token ) = @_;
my $format = {
break => sub { $self->_new_line() },
over => sub { $self->_over() },
back => sub { $self->_back() },
token => sub { $self->_add_token( $token ) },
push => sub { push @{ $self->{ '_level_stack' } }, $self->{ '_level' } },
pop => sub { $self->{ '_level' } = $self->_pop_level($token, '') },
reset => sub { $self->{ '_level' } = 0; @{ $self->{ '_level_stack' } } = (); },
};
for ( split /-/, lc $rule ) {
&{ $format->{ $_ } } if ( $format->{ $_ } );
}
}
=head2 _is_constant
Check if a token is a constant.
Code lifted from SQL::Beautify
=cut
sub _is_constant
{
my ( $self, $token ) = @_;
return ( $token =~ /^\d+$/ or $token =~ /^(['"`]).*\1$/ );
}
=head2 _is_punctuation
Check if a token is punctuation.
Code lifted from SQL::Beautify
=cut
sub _is_punctuation
{
my ( $self, $token ) = @_;
if ($self->{ 'comma' } eq 'start' and $token eq ',') {
return 0;
}
return ( $token =~ /^[,;.\[\]]$/ );
}
=head2 _generate_anonymized_string
Simply generate a random string, thanks to Perlmonks.
Returns the original value in certain cases which don't require anonymization,
like timestamps or intervals.
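A sketch of the behaviour (the replacement is random, shown only for
illustration):
    $self->_generate_anonymized_string( '', '2024-01-31', '' );   # date, returned as-is
    $self->_generate_anonymized_string( 'n = ', 'secret', '' );   # replaced by 10 random characters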
=cut
sub _generate_anonymized_string
{
my $self = shift;
my ( $before, $original, $after ) = @_;
# Prevent dates from being anonymized
return $original if $original =~ m{\A\d\d\d\d[/:-]\d\d[/:-]\d\d\z};
return $original if $original =~ m{\A\d\d[/:-]\d\d[/:-]\d\d\d\d\z};
# Prevent date formats like DD/MM/YYYY HH24:MI:SS from being anonymized
return $original if $original =~ m{
\A
(?:FM|FX|TM)?
(?:
HH | HH12 | HH24
| MI
| SS
| MS
| US
| SSSS
| AM | A\.M\. | am | a\.m\.
| PM | P\.M\. | pm | p\.m\.
| Y,YYY | YYYY | YYY | YY | Y
| IYYY | IYY | IY | I
| BC | B\.C\. | bc | b\.c\.
| AD | A\.D\. | ad | a\.d\.
| MONTH | Month | month | MON | Mon | mon | MM
| DAY | Day | day | DY | Dy | dy | DDD | DD | D
| W | WW | IW
| CC
| J
| Q
| RM | rm
| TZ | tz
| [\s/:-]
)+
(?:TH|th|SP)?
\z
};
# Prevent interval from being anonymized
return $original if ($before && ($before =~ /interval/i));
return $original if ($after && ($after =~ /^\)*::interval/i));
# Shortcut
my $cache = $self->{ '_anonymization_cache' };
# Range of characters to use in anonymized strings
my @chars = ( 'A' .. 'Z', 0 .. 9, 'a' .. 'z', '-', '_', '.' );
unless ( $cache->{ $original } ) {
# Actual anonymized version generation
$cache->{ $original } = join( '', map { $chars[ rand @chars ] } 1 .. 10 );
}
return $cache->{ $original };
}
=head2 anonymize
Anonymize literals in SQL queries by replacing parameters with fake values.
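A minimal usage sketch (the replacement values are random, the query text is
made up for illustration):
    $beautifier->query( q{SELECT * FROM t WHERE name = 'bob' AND birth = '2024-01-31'} );
    $beautifier->anonymize();
    # string literals are replaced by random values, date/interval literals are kept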
=cut
sub anonymize
{
my $self = shift;
my $query = $self->{ 'query' };
# just in case it has not been called in the main script
$query = $self->query() if (!$query);
return if ( !$query );
# Variable to hold anonymized versions, so we can provide the same value
# for the same input, within single query.
$self->{ '_anonymization_cache' } = {};
# Remove comments
$query =~ s/\/\*(.*?)\*\///gs;
# Clean query
$query =~ s/\\'//gs;
$query =~ s/('')+/\$EMPTYSTRING\$/gs;
# Anonymize each value
$query =~ s{
([^\s\']+[\s\(]*) # before
'([^']*)' # original
([\)]*::\w+)? # after
}{$1 . "'" . $self->_generate_anonymized_string($1, $2, $3) . "'" . ($3||'')}xeg;
$query =~ s/\$EMPTYSTRING\$/''/gs;
foreach my $k (keys %{ $self->{ 'keyword_constant' } }) {
$self->{ 'keyword_constant' }{$k} = "'" . $self->_generate_anonymized_string('', $self->{ 'keyword_constant' }{$k}, '') . "'";
}
$self->query( $query );
}
=head2 set_defaults
Sets defaults for newly created objects.
Currently defined defaults:
=over
=item spaces => 4
=item space => ' '
=item break => "\n"
=item uc_keywords => 2
=item uc_functions => 0
=item uc_types => 1
=item no_comments => 0
=item no_grouping => 0
=item placeholder => ''
=item multiline => 0
=item separator => ''
=item comma => 'end'
=item format => 'text'
=item colorize => 1
=item format_type => 0
=item wrap_limit => 0
=item wrap_after => 0
=item wrap_comment => 0
=item no_extra_line => 0
=item keep_newline => 0
=item no_space_function => 0
=back
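A sketch of tweaking a couple of the settings listed above after the defaults
have been applied (the $beautifier object is created elsewhere):
    $beautifier->set_defaults();
    $beautifier->{ 'spaces' }      = 2;   # indent with 2 spaces instead of 4
    $beautifier->{ 'uc_keywords' } = 3;   # capitalize keywords instead of uppercasing them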
=cut
sub set_defaults
{
my $self = shift;
$self->set_dicts();
# Set some defaults.
$self->{ 'query' } = '';
$self->{ 'spaces' } = 4;
$self->{ 'space' } = ' ';
$self->{ 'break' } = "\n";
$self->{ 'wrap' } = {};
$self->{ 'rules' } = {};
$self->{ 'uc_keywords' } = 2;
$self->{ 'uc_functions' } = 0;
$self->{ 'uc_types' } = 1;
$self->{ 'no_comments' } = 0;
$self->{ 'no_grouping' } = 0;
$self->{ 'placeholder' } = '';
$self->{ 'multiline' } = 0;
$self->{ 'keywords' } = $self->{ 'dict' }->{ 'pg_keywords' };
$self->{ 'types' } = $self->{ 'dict' }->{ 'pg_types' };
$self->{ 'functions' } = ();
push(@{ $self->{ 'functions' } }, keys %{ $self->{ 'dict' }->{ 'pg_functions' } });
$self->_refresh_functions_re();
$self->{ 'separator' } = '';
$self->{ 'comma' } = 'end';
$self->{ 'format' } = 'text';
$self->{ 'colorize' } = 1;
$self->{ 'format_type' } = 0;
$self->{ 'wrap_limit' } = 0;
$self->{ 'wrap_after' } = 0;
$self->{ 'wrap_comment' } = 0;
$self->{ 'no_extra_line' } = 0;
$self->{ 'keep_newline' } = 0;
$self->{ 'no_space_function' } = 0;
return;
}
=head2 format
Set output format - possible values: 'text' and 'html'
Default is text output. Returns 0 in case of a wrong format and keeps the default setting.
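For example (a sketch):
    $beautifier->format( 'html' );   # accepted, returns 1
    $beautifier->format( 'xml' );    # rejected, returns 0 and the setting is left unchanged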
=cut
sub format
{
my $self = shift;
my $format = shift;
if ( grep(/^$format$/i, 'text', 'html') ) {
$self->{ 'format' } = lc($format);
return 1;
}
return 0;
}
=head2 set_dicts
Sets various dictionaries (lists of keywords, functions, symbols, and the like).
This was moved to a separate function so it can be put at the very end of the
module, making the rest of the code easier to read.
=cut
sub set_dicts
{
my $self = shift;
# First load it all as "my" variables, to make it simpler to modify/map/grep/add
# Afterwards, when everything is ready, put it in $self->{'dict'}->{...}
my @pg_keywords = map { uc } qw(
ADD AFTER AGGREGATE ALL ALSO ALTER ALWAYS ANALYSE ANALYZE AND ANY ARRAY AS ASC ASYMMETRIC AUTHORIZATION ATTACH AUTO_INCREMENT
BACKWARD BEFORE BEGIN BERNOULLI BETWEEN BINARY BOTH BY CACHE CASCADE CASE CAST CHECK CHECKPOINT CLOSE CLUSTER
COLLATE COLLATION COLUMN COMMENT COMMIT COMMITTED CONCURRENTLY CONFLICT CONSTRAINT CONSTRAINT CONTINUE COPY
COST COSTS CREATE CROSS CUBE CURRENT CURRENT_DATE CURRENT_ROLE CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR
CYCLE DATABASE DEALLOCATE DECLARE DEFAULT DEFERRABLE DEFERRED DEFINER DELETE DELIMITER DESC DETACH DISABLE DISTINCT
DO DOMAIN DROP EACH ELSE ELSIF ENABLE ENCODING END EVENT EXCEPTION EXCEPT EXCLUDE EXCLUDING EXECUTE EXISTS EXPLAIN EXTENSION FALSE FETCH FILTER
FIRST FOLLOWING FOR FOREIGN FORWARD FREEZE FROM FULL FUNCTION GENERATED GRANT GROUP GROUPING HAVING HASHES HASH
IDENTITY IF ILIKE IMMUTABLE IN INCLUDING INCREMENT INDEX INHERITS INITIALLY INNER INOUT INSERT INSTEAD
INTERSECT INTO INVOKER IS ISNULL ISOLATION JOIN KEY LANGUAGE LAST LATERAL LC_COLLATE LC_CTYPE LEADING
LEAKPROOF LEFT LEFTARG LEVEL LIKE LIMIT LIST LISTEN LOAD LOCALTIME LOCALTIMESTAMP LOCK LOCKED LOGGED LOGIN
LOOP MAPPING MATCH MAXVALUE MERGES MINVALUE MODULUS MOVE NATURAL NEXT NOTHING NOTICE ORDINALITY
NO NOCREATEDB NOCREATEROLE NOSUPERUSER NOT NOTIFY NOTNULL NOWAIT NULL OFF OF OIDS ON ONLY OPEN OPERATOR OR ORDER
OUTER OVER OVERLAPS OWNER PARTITION PASSWORD PERFORM PLACING POLICY PRECEDING PREPARE PRIMARY PROCEDURE RANGE
REASSIGN RECURSIVE REFERENCES REINDEX REMAINDER RENAME REPEATABLE REPLACE REPLICA RESET RESTART RESTRICT RETURN RETURNING
RETURNS REVOKE RIGHT RIGHTARG ROLE ROLLBACK ROLLUP ROWS ROW RULE SAVEPOINT SCHEMA SCROLL SECURITY SELECT SEQUENCE
SEQUENCE SERIALIZABLE SERVER SESSION_USER SET SETOF SETS SHOW SIMILAR SKIP SNAPSHOT SOME STABLE START STRICT
SYMMETRIC SYSTEM TABLE TABLESAMPLE TABLESPACE TEMPLATE TEMPORARY THEN TO TRAILING TRANSACTION TRIGGER TRUE
TRUNCATE TYPE UNBOUNDED UNCOMMITTED UNION UNIQUE UNLISTEN UNLOCK UNLOGGED UPDATE USER USING VACUUM VALUES
VARIADIC VERBOSE VIEW VOLATILE WHEN WHERE WINDOW WITH WITHIN WORK XOR ZEROFILL
CALL GROUPS INCLUDE OTHERS PROCEDURES ROUTINE ROUTINES TIES READ_ONLY SHAREABLE READ_WRITE
BASETYPE SFUNC STYPE SFUNC1 STYPE1 SSPACE FINALFUNC FINALFUNC_EXTRA FINALFUNC_MODIFY COMBINEFUNC SERIALFUNC DESERIALFUNC
INITCOND MSFUNC MINVFUNC MSTYPE MSSPACE MFINALFUNC MFINALFUNC_EXTRA MFINALFUNC_MODIFY MINITCOND SORTOP
STORED REFRESH MATERIALIZED RAISE WITHOUT
);
my @pg_types = qw(
BIGINT BIGSERIAL BIT BOOLEAN BOOL BOX BYTEA CHARACTER CHAR CIDR CIRCLE DATE DOUBLE INET INTEGER INTERVAL
JSONB JSON LINE LSEG MACADDR8 MACADDR MONEY NUMERIC OID PG_LSN POINT POLYGON PRECISION REAL SMALLINT SMALLSERIAL
SERIAL TEXT TIMESTAMPTZ TIMESTAMP TSQUERY TSVECTOR TXID_SNAPSHOT UUID XML VARYING VARCHAR ZONE FLOAT4
FLOAT8 FLOAT NAME TID INT4RANGE INT8RANGE NUMRANGE TSRANGE TSTZRANGE DATERANGE INT2 INT4 INT8 INT TIME
REGCLASS REGCONFIG REGDICTIONARY REGNAMESPACE REGOPER REGOPERATOR REGPROC REGPROCEDURE REGROLE REGTYPE
);
my @sql_keywords = map { uc } qw(
ABORT ABSOLUTE ACCESS ACTION ADMIN ALSO ALWAYS ASSERTION ASSIGNMENT AT ATTRIBUTE BIGINT BOOLEAN
CALLED CASCADED CATALOG CHAIN CHANGE CHARACTER CHARACTERISTICS COLUMNS COMMENTS CONFIGURATION
CONNECTION CONSTRAINTS CONTENT CONVERSION CSV CURRENT DATA DATABASES DAY DEC DECIMAL DEFAULTS DELAYED
DELIMITERS DESCRIBE DICTIONARY DISABLE DISCARD DOCUMENT DOUBLE ENABLE ENCLOSED ENCRYPTED ENUM ESCAPE ESCAPED
EXCLUSIVE EXTERNAL FIELD FIELDS FLOAT FLUSH FOLLOWING FORCE FUNCTIONS GLOBAL GRANTED GREATEST HANDLER
HEADER HOLD HOUR IDENTIFIED IGNORE IMMEDIATE IMPLICIT INDEXES INFILE INHERIT INLINE INPUT INSENSITIVE
INT INTEGER KEYS KILL LABEL LARGE LEAST LEVEL LINES LOCAL LOW_PRIORITY MATCH MINUTE MODE MODIFY MONTH NAMES
NATIONAL NCHAR NONE NOTHING NULLIF NULLS OBJECT OFF OPERATOR OPTIMIZE OPTION OPTIONALLY OPTIONS OUT OUTFILE
OWNED PARSER PARTIAL PASSING PLANS PRECISION PREPARED PRESERVE PRIOR PRIVILEGES PROCEDURAL QUOTE READ
REAL RECHECK REF REGEXP RELATIVE RELEASE RLIKE ROW SEARCH SECOND SEQUENCES SESSION SHARE SIMPLE
SMALLINT SONAME STANDALONE STATEMENT STATISTICS STATUS STORAGE STRAIGHT_JOIN SYSID TABLES TEMP TERMINATED
TREAT TRUSTED TYPES UNENCRYPTED UNKNOWN UNSIGNED UNTIL USE VALID VALIDATE VALIDATOR VALUE VARIABLES VARYING
WHITESPACE WORK WRAPPER WRITE XMLATTRIBUTES YEAR YES ZONE
);
my @redshift_keywords = map { uc } qw(
AES128 AES256 ALLOWOVERWRITE BACKUP BLANKSASNULL BYTEDICT BZIP2 CREDENTIALS CURRENT_USER_ID DEFLATE DEFRAG
DELTA DELTA32K DISABLE DISTKEY EMPTYASNULL ENABLE ENCODE ENCRYPT ENCRYPTION ESCAPE EXPLICIT GLOBALDICT256
GLOBALDICT64K GZIP INTERLEAVED LUN LUNS LZO LZOP MINUS MOSTLY13 MOSTLY32 MOSTLY8 NEW OFFLINE OFFSET OLD OID
PARALLEL PERCENT PERMISSIONS RAW READRATIO RECOVER REJECTLOG RESORT RESPECT RESTORE SORTKEY SYSDATE TAG TDES
TEXT255 TEXT32K TIMESTAMP TOP TRUNCATECOLUMNS UNLOAD WALLET ADDQUOTES
);
for my $k ( @pg_keywords ) {
next if grep { $k eq $_ } @sql_keywords;
push @sql_keywords, $k;
}
my @pg_functions = map { lc } qw(
ascii age bit_length btrim cardinality cast char_length character_length coalesce
brin_summarize_range brin_summarize_new_values
convert chr current_date current_time current_timestamp count decode date_part date_trunc
encode extract get_byte get_bit initcap isfinite interval justify_hours justify_days
lower length lpad ltrim localtime localtimestamp md5 now octet_length overlay position pg_client_encoding
quote_ident quote_literal repeat replace rpad rtrim substring split_part strpos substr set_byte set_bit
trim to_ascii to_hex translate to_char to_date to_timestamp to_number timeofday upper
abbrev abs abstime abstimeeq abstimege abstimegt abstimein abstimele
abstimelt abstimene abstimeout abstimerecv abstimesend aclcontains acldefault
aclexplode aclinsert aclitemeq aclitemin aclitemout aclremove acos
any_in any_out anyarray_in anyarray_out anyarray_recv anyarray_send anyelement_in
anyelement_out anyenum_in anyenum_out anynonarray_in anynonarray_out anyrange_in anyrange_out
anytextcat area areajoinsel areasel armor array_agg array_agg_finalfn
array_agg_transfn array_append array_cat array_dims array_eq array_fill array_ge array_positions
array_gt array_in array_larger array_le array_length array_lower array_lt array_position
array_ndims array_ne array_out array_prepend array_recv array_remove array_replace array_send array_smaller
array_to_json array_to_string array_typanalyze array_upper arraycontained arraycontains arraycontjoinsel
arraycontsel arrayoverlap ascii_to_mic ascii_to_utf8 asin atan atan2
avg big5_to_euc_tw big5_to_mic big5_to_utf8 bit bit_and bit_in
bit_or bit_out bit_recv bit_send bitand bitcat bitcmp
biteq bitge bitgt bitle bitlt bitne bitnot
bitor bitshiftleft bitshiftright bittypmodin bittypmodout bitxor bool
bool_and bool_or booland_statefunc boolean booleq boolge boolgt boolin
boolle boollt boolne boolor_statefunc boolout boolrecv boolsend
box box_above box_above_eq box_add box_below box_below_eq box_center
box_contain box_contain_pt box_contained box_distance box_div box_eq box_ge
box_gt box_in box_intersect box_le box_left box_lt box_mul
box_out box_overabove box_overbelow box_overlap box_overleft box_overright box_recv
box_right box_same box_send box_sub bpchar bpchar_larger bpchar_pattern_ge
bpchar_pattern_gt bpchar_pattern_le bpchar_pattern_lt bpchar_smaller bpcharcmp bpchareq bpcharge
bpchargt bpchariclike bpcharicnlike bpcharicregexeq bpcharicregexne bpcharin bpcharle
bpcharlike bpcharlt bpcharne bpcharnlike bpcharout bpcharrecv bpcharregexeq
bpcharregexne bpcharsend bpchartypmodin bpchartypmodout broadcast btabstimecmp btarraycmp
btbeginscan btboolcmp btbpchar_pattern_cmp btbuild btbuildempty btbulkdelete btcanreturn
btcharcmp btcostestimate btendscan btfloat48cmp btfloat4cmp btfloat4sortsupport btfloat84cmp
btfloat8cmp btfloat8sortsupport btgetbitmap btgettuple btinsert btint24cmp btint28cmp
btint2cmp btint2sortsupport btint42cmp btint48cmp btint4cmp btint4sortsupport btint82cmp
btint84cmp btint8cmp btint8sortsupport btmarkpos btnamecmp btnamesortsupport btoidcmp
btoidsortsupport btoidvectorcmp btoptions btrecordcmp btreltimecmp btrescan btrestrpos
bttext_pattern_cmp bttextcmp bttidcmp bttintervalcmp btvacuumcleanup bytea_string_agg_finalfn
bytea_string_agg_transfn byteacat byteacmp byteaeq byteage byteagt byteain byteale
bytealike bytealt byteane byteanlike byteaout bytearecv byteasend
cash_cmp cash_div_cash cash_div_flt4 cash_div_flt8 cash_div_int2 cash_div_int4 cash_eq
cash_ge cash_gt cash_in cash_le cash_lt cash_mi cash_mul_flt4
cash_mul_flt8 cash_mul_int2 cash_mul_int4 cash_ne cash_out cash_pl cash_recv
cash_send cash_words cashlarger cashsmaller cbrt ceil ceiling
center char chareq charge chargt charin charle
charlt charne charout charrecv charsend cideq cidin
cidout cidr cidr_in cidr_out cidr_recv cidr_send cidrecv
cidsend circle circle_above circle_add_pt circle_below circle_center circle_contain
circle_contain_pt circle_contained circle_distance circle_div_pt circle_eq circle_ge circle_gt
circle_in circle_le circle_left circle_lt circle_mul_pt circle_ne circle_out
circle_overabove circle_overbelow circle_overlap circle_overleft circle_overright circle_recv circle_right
circle_same circle_send circle_sub_pt clock_timestamp close_lb close_ls close_lseg
close_pb close_pl close_ps close_sb close_sl col_description concat
concat_ws contjoinsel contsel convert_from convert_to corr cos
cot covar_pop covar_samp crypt cstring_in cstring_out cstring_recv
cstring_send cume_dist current_database current_query current_schema current_schemas current_setting
current_user currtid currtid2 currval date date_cmp date_cmp_timestamp date_cmp_timestamptz date_eq
date_eq_timestamp date_eq_timestamptz date_ge date_ge_timestamp date_ge_timestamptz date_gt date_gt_timestamp
date_gt_timestamptz date_in date_larger date_le date_le_timestamp date_le_timestamptz date_lt
date_lt_timestamp date_lt_timestamptz date_mi date_mi_interval date_mii date_ne date_ne_timestamp
date_ne_timestamptz date_out date_pl_interval date_pli date_recv date_send date_smaller
date_sortsupport daterange daterange_canonical daterange_subdiff datetime_pl datetimetz_pl
dblink_connect_u dblink_connect dblink_disconnect dblink_exec dblink_open dblink_fetch dblink_close
dblink_get_connections dblink_error_message dblink_send_query dblink_is_busy dblink_get_notify
dblink_get_result dblink_cancel_query dblink_get_pkey dblink_build_sql_insert dblink_build_sql_delete
dblink_build_sql_update dblink dcbrt dearmor decrypt decrypt_iv degrees dense_rank dexp diagonal
decimal diameter digest dispell_init dispell_lexize dist_cpoly dist_lb dist_pb
dist_pc dist_pl dist_ppath dist_ps dist_sb dist_sl div
dlog1 dlog10 domain_in domain_recv dpow dround dsimple_init
dsimple_lexize dsnowball_init dsnowball_lexize dsqrt dsynonym_init dsynonym_lexize dtrunc
elem_contained_by_range encrypt encrypt_iv enum_cmp enum_eq enum_first enum_ge
enum_gt enum_in enum_larger enum_last enum_le enum_lt enum_ne
enum_out enum_range enum_recv enum_send enum_smaller eqjoinsel eqsel
euc_cn_to_mic euc_cn_to_utf8 euc_jis_2004_to_shift_jis_2004 euc_jis_2004_to_utf8
euc_jp_to_mic euc_jp_to_sjis euc_jp_to_utf8
euc_kr_to_mic euc_kr_to_utf8 euc_tw_to_big5 euc_tw_to_mic euc_tw_to_utf8 every exp
factorial family fdw_handler_in fdw_handler_out first_value float4 float48div
float48eq float48ge float48gt float48le float48lt float48mi float48mul
float48ne float48pl float4_accum float4abs float4div float4eq float4ge
float4gt float4in float4larger float4le float4lt float4mi float4mul
float4ne float4out float4pl float4recv float4send float4smaller float4um
float4up float8 float84div float84eq float84ge float84gt float84le
float84lt float84mi float84mul float84ne float84pl float8_accum float8_avg
float8_combine float8_regr_combine float8_corr float8_covar_pop float8_covar_samp
float8_regr_accum float8_regr_avgx float8_regr_avgy float8_regr_intercept
float8_regr_r2 float8_regr_slope float8_regr_sxx float8_regr_sxy float8_regr_syy
float8_stddev_pop float8_stddev_samp
float8_var_pop float8_var_samp float8abs float8div float8eq float8ge float8gt
float8in float8larger float8le float8lt float8mi float8mul float8ne
float8out float8pl float8recv float8send float8smaller float8um float8up
floor flt4_mul_cash flt8_mul_cash fmgr_c_validator fmgr_internal_validator fmgr_sql_validator format
format_type gb18030_to_utf8 gbk_to_utf8 gen_random_bytes gen_salt generate_series generate_subscripts
geometry get_current_ts_config getdatabaseencoding getpgusername gin_cmp_prefix gin_cmp_tslexeme
gin_extract_tsquery gin_extract_tsvector gin_tsquery_consistent ginarrayconsistent ginarrayextract
ginbeginscan ginbuild ginbuildempty ginbulkdelete gincostestimate ginendscan gingetbitmap gininsert
ginmarkpos ginoptions ginqueryarrayextract ginrescan ginrestrpos ginvacuumcleanup gist_box_compress
gist_box_consistent gist_box_decompress gist_box_penalty gist_box_picksplit gist_box_same gist_box_union
gist_circle_compress gist_circle_consistent gist_point_compress gist_point_consistent gist_point_distance
gist_poly_compress gist_poly_consistent gistbeginscan gistbuild gistbuildempty gistbulkdelete
gistcostestimate gistendscan gistgetbitmap gistgettuple gistinsert gistmarkpos gistoptions gistrescan
gistrestrpos gistvacuumcleanup gtsquery_compress gtsquery_consistent gtsquery_decompress gtsquery_penalty
gtsquery_picksplit gtsquery_same gtsquery_union gtsvector_compress gtsvector_consistent gtsvector_decompress
gtsvector_penalty gtsvector_picksplit gtsvector_same gtsvector_union gtsvectorin gtsvectorout
has_any_column_privilege has_column_privilege has_database_privilege has_foreign_data_wrapper_privilege
has_function_privilege has_language_privilege has_schema_privilege has_sequence_privilege has_server_privilege
has_table_privilege has_tablespace_privilege has_type_privilege hash_aclitem hash_array hash_numeric hash_range
hashbeginscan hashbpchar hashbuild hashbuildempty hashbulkdelete hashchar hashcostestimate hash_aclitem_extended
hashendscan hashenum hashfloat4 hashfloat8 hashgetbitmap hashgettuple hashinet hashinsert hashint2
hashint2extended hashint2vector hashint4 hashint4extended hashint8 hashint8extended hashmacaddr
hashfloat4extended hashfloat8extended hashcharextended hashoidextended hashnameextended hashmarkpos
hashoidvectorextended hashmacaddrextended hashinetextended hashname hashoid hashoidvector hashoptions
hash_numeric_extended hashmacaddr8extended hash_array_extended hashrescan hashrestrpos hashtext
hashbpcharextended time_hash_extended timetz_hash_extended interval_hash_extended timestamp_hash_extended
uuid_hash_extended pg_lsn_hash_extended hashenumextended jsonb_hash_extended hash_range_extended
hashtextextended hashvacuumcleanup hashvarlena height hmac host hostmask iclikejoinsel
iclikesel icnlikejoinsel icnlikesel icregexeqjoinsel icregexeqsel icregexnejoinsel icregexnesel
inet_client_addr inet_client_port inet_in inet_out inet_recv inet_send inet_server_addr
inet_server_port inetand inetmi inetmi_int8 inetnot inetor inetpl
int int2 int24div int24eq int24ge int24gt int24le int24lt integer
int24mi int24mul int24ne int24pl int28div int28eq int28ge
int28gt int28le int28lt int28mi int28mul int28ne int28pl
int2_accum int2_avg_accum int2_mul_cash int2_sum int2abs int2and int2div
int2eq int2ge int2gt int2in int2larger int2le int2lt
int2mi int2mod int2mul int2ne int2not int2or int2out
int2pl int2recv int2send int2shl int2shr int2smaller int2um
int2up int2vectoreq int2vectorin int2vectorout int2vectorrecv int2vectorsend int2xor
int4 int42div int42eq int42ge int42gt int42le int42lt
int42mi int42mul int42ne int42pl int48div int48eq int48ge
int48gt int48le int48lt int48mi int48mul int48ne int48pl
int4_accum int4_avg_accum int4_mul_cash int4_sum int4abs int4and int4div
int4eq int4ge int4gt int4in int4inc int4larger int4le
int4lt int4mi int4mod int4mul int4ne int4not int4or
int4out int4pl int4range int4range_canonical int4range_subdiff int4recv int4send
int4shl int4shr int4smaller int4um int4up int4xor int8
int82div int82eq int82ge int82gt int82le int82lt int82mi
int82mul int82ne int82pl int84div int84eq int84ge int84gt
int84le int84lt int84mi int84mul int84ne int84pl int8_accum
int8_avg int8_avg_accum int8_sum int8abs int8and int8div int8eq
int8ge int8gt int8in int8inc int8inc_any int8inc_float8_float8 int8larger
int8le int8lt int8mi int8mod int8mul int8ne int8not
int8or int8out int8pl int8pl_inet int8range int8range_canonical int8range_subdiff
int8recv int8send int8shl int8shr int8smaller int8um int8up
int8xor integer_pl_date inter_lb inter_sb inter_sl internal_in internal_out
interval_accum interval_avg interval_cmp interval_div interval_eq interval_ge interval_gt
interval_hash interval_in interval_larger interval_le interval_lt interval_mi interval_mul
interval_ne interval_out interval_pl interval_pl_date interval_pl_time interval_pl_timestamp interval_pl_timestamptz
interval_pl_timetz interval_recv interval_send interval_smaller interval_transform interval_um intervaltypmodin
intervaltypmodout intinterval isclosed isempty ishorizontal iso8859_1_to_utf8 iso8859_to_utf8
iso_to_koi8r iso_to_mic iso_to_win1251 iso_to_win866 isopen isparallel isperp
isvertical johab_to_utf8 json_agg jsonb_agg json_array_elements jsonb_array_elements
json_array_elements_text jsonb_array_elements_text json_to_tsvector jsonb_insert
json_array_length jsonb_array_length json_build_array json_build_object json_each jsonb_each json_each_text
jsonb_each_text json_extract_path jsonb_extract_path json_extract_path_text jsonb_extract_path_text json_in
json_object json_object_agg jsonb_object_agg json_object_keys jsonb_object_keys json_out json_populate_record
jsonb_populate_record json_populate_recordset jsonb_pretty jsonb_populate_recordset json_recv json_send
jsonb_set json_typeof jsonb_typeof json_to_record jsonb_to_record json_to_recordset jsonb_to_recordset
justify_interval koi8r_to_iso koi8r_to_mic koi8r_to_utf8 koi8r_to_win1251 koi8r_to_win866 koi8u_to_utf8
jsonb_path_query jsonb_build_object jsonb_object jsonb_build_array jsonb_path_match jsonb_path_exists
lag language_handler_in language_handler_out last_value lastval latin1_to_mic latin2_to_mic latin2_to_win1250
latin3_to_mic latin4_to_mic lead like_escape likejoinsel jsonb_path_query_first jsonb_path_query_array
likesel line line_distance line_eq line_horizontal line_in line_interpt
line_intersect line_out line_parallel line_perp line_recv line_send line_vertical
ln lo_close lo_creat lo_create lo_export lo_import lo_lseek lo_compat lo_from_bytea lo_get lo_import_with_oid
lo_open lo_tell lo_truncate lo_unlink log lo_read lower_inc lo_seek64 lo_put lo_tell64 lo_truncate64 lo_write
lower_inf lowrite lseg lseg_center lseg_distance lseg_eq lseg_ge
lseg_gt lseg_horizontal lseg_in lseg_interpt lseg_intersect lseg_le lseg_length
lseg_lt lseg_ne lseg_out lseg_parallel lseg_perp lseg_recv lseg_send
lseg_vertical macaddr_and macaddr_cmp macaddr_eq macaddr_ge macaddr_gt macaddr_in
macaddr_le macaddr_lt macaddr_ne macaddr_not macaddr_or macaddr_out macaddr_recv
macaddr_send makeaclitem make_interval make_tsrange masklen max mic_to_ascii mic_to_big5 mic_to_euc_cn
mic_to_euc_jp mic_to_euc_kr mic_to_euc_tw mic_to_iso mic_to_koi8r mic_to_latin1 mic_to_latin2
mic_to_latin3 mic_to_latin4 mic_to_sjis mic_to_win1250 mic_to_win1251 mic_to_win866 min
mktinterval mode mod money mul_d_interval name nameeq namege make_timestamptz make_timestamp
namegt nameiclike nameicnlike nameicregexeq nameicregexne namein namele make_time make_date
namelike namelt namene namenlike nameout namerecv nameregexeq make_interval
nameregexne namesend neqjoinsel neqsel netmask network network_cmp
network_eq network_ge network_gt network_le network_lt network_ne network_sub
network_subeq network_sup network_supeq nextval nlikejoinsel nlikesel notlike
npoints nth_value ntile numeric numeric_abs numeric_accum numeric_add
numeric_avg numeric_avg_accum numeric_cmp numeric_div numeric_div_trunc numeric_eq numeric_exp
numeric_fac numeric_ge numeric_gt numeric_in numeric_inc numeric_larger numeric_le
numeric_ln numeric_log numeric_lt numeric_mod numeric_mul numeric_ne numeric_out
numeric_power numeric_recv numeric_send numeric_smaller numeric_sqrt numeric_stddev_pop numeric_stddev_samp
numeric_sub numeric_transform numeric_uminus numeric_uplus numeric_var_pop numeric_var_samp numerictypmodin
numerictypmodout numnode numrange numrange_subdiff obj_description oid oideq
oidge oidgt oidin oidlarger oidle oidlt oidne
oidout oidrecv oidsend oidsmaller oidvectoreq oidvectorge oidvectorgt
oidvectorin oidvectorle oidvectorlt oidvectorne oidvectorout oidvectorrecv oidvectorsend
oidvectortypes on_pb on_pl on_ppath on_ps on_sb on_sl
opaque_in opaque_out overlaps path path_add path_add_pt path_center
path_contain_pt path_distance path_div_pt path_in path_inter path_length path_mul_pt
path_n_eq path_n_ge path_n_gt path_n_le path_n_lt path_npoints path_out parse_ident
path_recv path_send path_sub_pt pclose percent_rank percentile_cont percentile_disc
pg_advisory_lock pg_advisory_lock_shared pg_advisory_unlock pg_advisory_unlock_all pg_advisory_unlock_shared
pg_advisory_xact_lock pg_advisory_xact_lock_shared pg_available_extension_versions pg_available_extensions
pg_backend_pid pg_cancel_backend pg_char_to_encoding pg_collation_for pg_collation_is_visible pg_column_size
pg_conf_load_time pg_conversion_is_visible pg_create_restore_point pg_current_xlog_insert_location
pg_current_xlog_location pg_cursor pg_database_size pg_describe_object pg_encoding_max_length
pg_encoding_to_char pg_export_snapshot pg_extension_config_dump pg_extension_update_paths pg_function_is_visible
pg_get_constraintdef pg_get_expr pg_get_function_arguments pg_filenode_relation pg_indexam_has_property
pg_get_function_identity_arguments pg_get_function_result pg_get_functiondef pg_get_indexdef pg_get_keywords
pg_get_ruledef pg_get_serial_sequence pg_get_triggerdef pg_get_userbyid pg_get_viewdef pg_has_role
pg_indexes_size pg_is_in_recovery pg_is_other_temp_schema pg_is_xlog_replay_paused pg_last_xact_replay_timestamp
pg_last_xlog_receive_location pg_last_xlog_replay_location pg_listening_channels pg_lock_status pg_ls_dir
pg_my_temp_schema pg_node_tree_in pg_node_tree_out pg_node_tree_recv pg_node_tree_send pg_notify
pg_opclass_is_visible pg_operator_is_visible pg_opfamily_is_visible pg_options_to_table pg_index_has_property
pg_postmaster_start_time pg_prepared_statement pg_prepared_xact pg_read_binary_file pg_read_file
pg_relation_filenode pg_relation_filepath pg_relation_size pg_reload_conf pg_rotate_logfile
pg_sequence_parameters pg_show_all_settings pg_size_pretty pg_sleep pg_start_backup pg_index_column_has_property
pg_stat_clear_snapshot pg_stat_file pg_stat_get_activity pg_stat_get_analyze_count pg_stat_get_autoanalyze_count
pg_stat_get_autovacuum_count pg_get_object_address pg_identify_object_as_address pg_stat_get_backend_activity
pg_stat_get_backend_activity_start pg_stat_get_backend_client_addr pg_stat_get_backend_client_port
pg_stat_get_backend_dbid pg_stat_get_backend_idset pg_stat_get_backend_pid pg_stat_get_backend_start
pg_stat_get_backend_userid pg_stat_get_backend_waiting pg_stat_get_backend_xact_start
pg_stat_get_bgwriter_buf_written_checkpoints pg_stat_get_bgwriter_buf_written_clean
pg_stat_get_bgwriter_maxwritten_clean pg_stat_get_bgwriter_requested_checkpoints
pg_stat_get_bgwriter_stat_reset_time pg_stat_get_bgwriter_timed_checkpoints pg_stat_get_blocks_fetched
pg_stat_get_blocks_hit pg_stat_get_buf_alloc pg_stat_get_buf_fsync_backend pg_stat_get_buf_written_backend
pg_stat_get_checkpoint_sync_time pg_stat_get_checkpoint_write_time pg_stat_get_db_blk_read_time
pg_stat_get_db_blk_write_time pg_stat_get_db_blocks_fetched pg_stat_get_db_blocks_hit
pg_stat_get_db_conflict_all pg_stat_get_db_conflict_bufferpin pg_stat_get_db_conflict_lock
pg_stat_get_db_conflict_snapshot pg_stat_get_db_conflict_startup_deadlock pg_stat_get_db_conflict_tablespace
pg_stat_get_db_deadlocks pg_stat_get_db_numbackends pg_stat_get_db_stat_reset_time pg_stat_get_db_temp_bytes
pg_stat_get_db_temp_files pg_stat_get_db_tuples_deleted pg_stat_get_db_tuples_fetched
pg_stat_get_db_tuples_inserted pg_stat_get_db_tuples_returned pg_stat_get_db_tuples_updated
pg_stat_get_db_xact_commit pg_stat_get_db_xact_rollback pg_stat_get_dead_tuples pg_stat_get_function_calls
pg_stat_get_function_self_time pg_stat_get_function_total_time pg_stat_get_last_analyze_time
pg_stat_get_last_autoanalyze_time pg_stat_get_last_autovacuum_time pg_stat_get_last_vacuum_time
pg_stat_get_live_tuples pg_stat_get_numscans pg_stat_get_tuples_deleted pg_stat_get_tuples_fetched
pg_stat_get_tuples_hot_updated pg_stat_get_tuples_inserted pg_stat_get_tuples_returned
pg_stat_get_tuples_updated pg_stat_get_vacuum_count pg_stat_get_wal_senders pg_stat_get_xact_blocks_fetched
pg_stat_get_xact_blocks_hit pg_stat_get_xact_function_calls pg_stat_get_xact_function_self_time
pg_stat_get_xact_function_total_time pg_stat_get_xact_numscans pg_stat_get_xact_tuples_deleted
pg_stat_get_xact_tuples_fetched pg_stat_get_xact_tuples_hot_updated pg_stat_get_xact_tuples_inserted
pg_stat_get_xact_tuples_returned pg_stat_get_xact_tuples_updated pg_stat_reset pg_stat_reset_shared
pg_stat_reset_single_function_counters pg_stat_reset_single_table_counters pg_stop_backup pg_switch_xlog
pg_table_is_visible pg_table_size pg_tablespace_databases pg_tablespace_location pg_tablespace_size
pg_terminate_backend pg_timezone_abbrevs pg_timezone_names pg_total_relation_size pg_trigger_depth
pg_try_advisory_lock pg_try_advisory_lock_shared pg_try_advisory_xact_lock pg_try_advisory_xact_lock_shared
pg_ts_config_is_visible pg_ts_dict_is_visible pg_ts_parser_is_visible pg_ts_template_is_visible
pg_type_is_visible pg_typeof pg_xact_commit_timestamp pg_last_committed_xact pg_xlog_location_diff
pg_xlog_replay_pause pg_xlog_replay_resume pg_xlogfile_name pg_xlogfile_name_offset pgp_key_id pgp_pub_decrypt
pgp_pub_decrypt_bytea pgp_pub_encrypt pgp_pub_encrypt_bytea pgp_sym_decrypt pgp_sym_decrypt_bytea
pgp_sym_encrypt pgp_sym_encrypt_bytea pi plainto_tsquery plpgsql_call_handler plpgsql_inline_handler
plpgsql_validator point point_above point_add point_below point_distance point_div point_eq
point_horiz point_in point_left point_mul point_ne point_out point_recv
point_right point_send point_sub point_vert poly_above poly_below poly_center
poly_contain poly_contain_pt poly_contained poly_distance poly_in poly_left poly_npoints
poly_out poly_overabove poly_overbelow poly_overlap poly_overleft poly_overright poly_recv
poly_right poly_same poly_send polygon popen positionjoinsel positionsel
postgresql_fdw_validator pow power prsd_end prsd_headline prsd_lextype prsd_nexttoken
prsd_start pt_contained_circle pt_contained_poly querytree
quote_nullable radians radius random range_adjacent range_after range_before
range_cmp range_contained_by range_contains range_contains_elem range_eq range_ge range_gist_compress
range_gist_consistent range_gist_decompress range_gist_penalty range_gist_picksplit range_gist_same
range_gist_union range_gt range_merge
range_in range_intersect range_le range_lt range_minus range_ne range_out
range_overlaps range_overleft range_overright range_recv range_send range_typanalyze range_union
rank record_eq record_ge record_gt record_in record_le record_lt regexp_match
record_ne record_out record_recv record_send regclass regclassin regclassout
regclassrecv regclasssend regconfigin regconfigout regconfigrecv regconfigsend regdictionaryin
regdictionaryout regdictionaryrecv regdictionarysend regexeqjoinsel regexeqsel regexnejoinsel regexnesel
regexp_matches regexp_replace regexp_split_to_array regexp_split_to_table regoperatorin regoperatorout regoperatorrecv
regoperatorsend regoperin regoperout regoperrecv regopersend regprocedurein regprocedureout
regprocedurerecv regproceduresend regprocin regprocout regprocrecv regprocsend regr_avgx
regr_avgy regr_count regr_intercept regr_r2 regr_slope regr_sxx regr_sxy
regr_syy regtypein regtypeout regtyperecv regtypesend reltime reltimeeq
reltimege reltimegt reltimein reltimele reltimelt reltimene reltimeout
reltimerecv reltimesend reverse round row_number row_to_json
scalargtjoinsel scalargtsel scalarltjoinsel scalarltsel
session_user set_config set_masklen setseed setval setweight shell_in
shell_out shift_jis_2004_to_euc_jis_2004 shift_jis_2004_to_utf8 shobj_description sign similar_escape sin
sjis_to_euc_jp sjis_to_mic sjis_to_utf8 slope smgreq smgrin smgrne
smgrout spg_kd_choose spg_kd_config spg_kd_inner_consistent spg_kd_picksplit spg_quad_choose spg_quad_config
spg_quad_inner_consistent spg_quad_leaf_consistent spg_quad_picksplit spg_text_choose spg_text_config spg_text_inner_consistent spg_text_leaf_consistent
spg_text_picksplit spgbeginscan spgbuild spgbuildempty spgbulkdelete spgcanreturn spgcostestimate
spgendscan spggetbitmap spggettuple spginsert spgmarkpos spgoptions spgrescan
spgrestrpos spgvacuumcleanup sqrt statement_timestamp stddev stddev_pop stddev_samp
string_agg string_agg_finalfn string_agg_transfn string_to_array strip sum
tan text text_ge text_gt text_larger
text_le text_lt text_pattern_ge text_pattern_gt text_pattern_le text_pattern_lt text_smaller
textanycat textcat texteq texticlike texticnlike texticregexeq texticregexne
textin textlen textlike textne textnlike textout textrecv
textregexeq textregexne textsend thesaurus_init thesaurus_lexize tideq tidge
tidgt tidin tidlarger tidle tidlt tidne tidout
tidrecv tidsend tidsmaller time time_cmp time_eq time_ge
time_gt time_hash time_in time_larger time_le time_lt time_mi_interval
time_mi_time time_ne time_out time_pl_interval time_recv time_send time_smaller
time_transform timedate_pl timemi timenow timepl timestamp timestamp_cmp
timestamp_cmp_date timestamp_cmp_timestamptz timestamp_eq timestamp_eq_date timestamp_eq_timestamptz timestamp_ge timestamp_ge_date
timestamp_ge_timestamptz timestamp_gt timestamp_gt_date timestamp_gt_timestamptz timestamp_hash timestamp_in timestamp_larger
timestamp_le timestamp_le_date timestamp_le_timestamptz timestamp_lt timestamp_lt_date timestamp_lt_timestamptz timestamp_mi
timestamp_mi_interval timestamp_ne timestamp_ne_date timestamp_ne_timestamptz timestamp_out timestamp_pl_interval timestamp_recv
timestamp_send timestamp_smaller timestamp_sortsupport timestamp_transform timestamptypmodin timestamptypmodout timestamptz
timestamptz_cmp timestamptz_cmp_date timestamptz_cmp_timestamp timestamptz_eq timestamptz_eq_date timestamptz_eq_timestamp timestamptz_ge
timestamptz_ge_date timestamptz_ge_timestamp timestamptz_gt timestamptz_gt_date timestamptz_gt_timestamp timestamptz_in timestamptz_larger
timestamptz_le timestamptz_le_date timestamptz_le_timestamp timestamptz_lt timestamptz_lt_date timestamptz_lt_timestamp timestamptz_mi
timestamptz_mi_interval timestamptz_ne timestamptz_ne_date timestamptz_ne_timestamp timestamptz_out timestamptz_pl_interval timestamptz_recv
timestamptz_send timestamptz_smaller timestamptztypmodin timestamptztypmodout timetypmodin timetypmodout timetz
timetz_cmp timetz_eq timetz_ge timetz_gt timetz_hash timetz_in timetz_larger
timetz_le timetz_lt timetz_mi_interval timetz_ne timetz_out timetz_pl_interval timetz_recv
timetz_send timetz_smaller timetzdate_pl timetztypmodin timetztypmodout timezone tinterval
tintervalct tintervalend tintervaleq tintervalge tintervalgt tintervalin tintervalle
tintervalleneq tintervallenge tintervallengt tintervallenle tintervallenlt tintervallenne tintervallt
tintervalne tintervalout tintervalov tintervalrecv tintervalrel tintervalsame tintervalsend
tintervalstart to_json to_tsquery to_tsvector transaction_timestamp trigger_out trunc ts_debug
ts_headline ts_lexize ts_match_qv ts_match_tq ts_match_tt ts_match_vq ts_parse ts_delete ts_filter
ts_rank ts_rank_cd ts_rewrite ts_stat ts_token_type ts_typanalyze tsmatchjoinsel tsquery_phrase
tsmatchsel tsq_mcontained tsq_mcontains tsquery_and tsquery_cmp tsquery_eq tsquery_ge
tsquery_gt tsquery_le tsquery_lt tsquery_ne tsquery_not tsquery_or tsqueryin websearch_to_tsquery
tsqueryout tsqueryrecv tsquerysend tsrange tsrange_subdiff tstzrange tstzrange_subdiff phraseto_tsquery
tsvector_cmp tsvector_concat tsvector_eq tsvector_ge tsvector_gt tsvector_le tsvector_lt
tsvector_ne tsvectorin tsvectorout tsvectorrecv tsvectorsend txid_current txid_current_snapshot
txid_snapshot_in txid_snapshot_out txid_snapshot_recv txid_snapshot_send txid_snapshot_xip
txid_snapshot_xmax txid_snapshot_xmin
txid_visible_in_snapshot uhc_to_utf8 unknownin unknownout unknownrecv unknownsend unnest
upper_inc upper_inf utf8_to_ascii utf8_to_big5 utf8_to_euc_cn utf8_to_euc_jis_2004 utf8_to_euc_jp
utf8_to_euc_kr utf8_to_euc_tw utf8_to_gb18030 utf8_to_gbk utf8_to_iso8859 utf8_to_iso8859_1 utf8_to_johab
utf8_to_koi8r utf8_to_koi8u utf8_to_shift_jis_2004 utf8_to_sjis utf8_to_uhc utf8_to_win uuid_cmp
uuid_eq uuid_ge uuid_gt uuid_hash uuid_in uuid_le uuid_lt
uuid_ne uuid_out uuid_recv uuid_send var_pop var_samp varbit
varbit_in varbit_out varbit_recv varbit_send varbit_transform varbitcmp varbiteq
varbitge varbitgt varbitle varbitlt varbitne varbittypmodin varbittypmodout
varchar varying varchar_transform varcharin varcharout varcharrecv varcharsend varchartypmodin
varchartypmodout variance version void_in void_out void_recv void_send
width width_bucket win1250_to_latin2 win1250_to_mic win1251_to_iso win1251_to_koi8r win1251_to_mic
win1251_to_win866 win866_to_iso win866_to_koi8r win866_to_mic win866_to_win1251 win_to_utf8 xideq
xideqint4 xidin xidout xidrecv xidsend xml xml_in xmlcomment xpath xpath_exists table_to_xmlschema
query_to_xmlschema cursor_to_xmlschema table_to_xml_and_xmlschema query_to_xml_and_xmlschema
schema_to_xml schema_to_xmlschema schema_to_xml_and_xmlschema database_to_xml database_to_xmlschema xmlroot
database_to_xml_and_xmlschema table_to_xml query_to_xml cursor_to_xml xmlcomment xmlconcat xmlelement xmlforest
xml_is_well_formed_content xml_is_well_formed_document xml_is_well_formed xml_out xml_recv xml_send xmlagg
xmlpi query_to_xml cursor_to_xml xmlserialize xmltable
);
my @copy_keywords = ( 'STDIN', 'STDOUT' );
my %symbols = (
'=' => '=', '<' => '<', '>' => '>', '|' => '|', ',' => ',', '.' => '.', '+' => '+', '-' => '-',
'*' => '*', '/' => '/', '!=' => '!=', '%' => '%', '<=' => '<=', '>=' => '>=', '<>' => '<>'
);
my @brackets = ( '(', ')' );
# All dictionaries are now set up and adjusted, store them in $self->{'dict'}->{...}
$self->{ 'dict' }->{ 'pg_keywords' } = \@pg_keywords;
$self->{ 'dict' }->{ 'pg_types' } = \@pg_types;
$self->{ 'dict' }->{ 'sql_keywords' } = \@sql_keywords;
$self->{ 'dict' }->{ 'redshift_keywords' } = \@redshift_keywords;
$self->{ 'dict' }->{ 'pg_functions' } = ();
map { $self->{ 'dict' }->{ 'pg_functions' }{$_} = ''; } @pg_functions;
$self->{ 'dict' }->{ 'copy_keywords' } = \@copy_keywords;
$self->{ 'dict' }->{ 'symbols' } = \%symbols;
$self->{ 'dict' }->{ 'brackets' } = \@brackets;
return;
}
=head2 _remove_dynamic_code
Internal function used to hide dynamic code in plpgsql from the parser.
The original values are restored with function _restore_dynamic_code().
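A minimal, self-contained sketch of the same placeholder technique (illustrative
only, not the actual method; the variable names are hypothetical):

    use strict;
    use warnings;

    my $sql = 'EXECUTE $code$SELECT 1; DELETE FROM t;$code$;';
    my %saved;
    my $idx = 0;
    # Auto-detect the dollar-quote tag used after EXECUTE ($code$ here)
    my ($sep) = $sql =~ /EXECUTE\s+(\$[^\$\s]*\$)/is;
    # Mask the dynamic code with TEXTVALUE<n> placeholders
    while ($sql =~ s/(\Q$sep\E.*?\Q$sep\E)/TEXTVALUE$idx/s) {
        $saved{$idx++} = $1;
    }
    # $sql can now be tokenized safely, then the code is put back:
    $sql =~ s/TEXTVALUE(\d+)/$saved{$1}/gs;
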
=cut
sub _remove_dynamic_code
{
my ($self, $str, $code_sep) = @_;
my @dynsep = ();
push(@dynsep, $code_sep) if ($code_sep && $code_sep ne "'");
# Try to auto-detect the string separator if none is provided.
# Note that the default single-quote separator is natively supported.
if ($#dynsep == -1)
{
# If a dollar sign is found after EXECUTE, the token running up to the next
# dollar sign (the dollar-quote tag) is used as the text delimiter
@dynsep = $$str =~ /EXECUTE\s+(\$[^\$\s]*\$)/igs;
}
foreach my $sep (@dynsep)
{
while ($$str =~ s/(\Q$sep\E.*?\Q$sep\E)/TEXTVALUE$self->{idx_code}/s)
{
$self->{dynamic_code}{$self->{idx_code}} = $1;
$self->{idx_code}++;
}
}
# Replace any COMMENT constant enclosed in single quotes
while ($$str =~ s/IS\s+('(?:.*?)')\s*;/IS TEXTVALUE$self->{idx_code};/s)
{
$self->{dynamic_code}{$self->{idx_code}} = $1;
$self->{idx_code}++;
}
# Keep parts enclosed between doubled single quotes (PGFESCQ1 placeholders) untouched
while ($$str =~ s/(PGFESCQ1(?:[^\r\n\|;]*?)PGFESCQ1)/TEXTVALUE$self->{idx_code}/s)
{
$self->{dynamic_code}{$self->{idx_code}} = $1;
$self->{idx_code}++;
}
}
=head2 _restore_dynamic_code
Internal function used to restore the plpgsql dynamic code
that was removed by the _remove_dynamic_code() method.
=cut
sub _restore_dynamic_code
{
my ($self, $str) = @_;
$$str =~ s/TEXTVALUE(\d+)/$self->{dynamic_code}{$1}/gs;
}
=head2 _quote_operator
Internal function used to quote operators made of multiple characters
so that they are tokenized as a single word.
The original values are restored with function _restore_operator().
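For illustration, a self-contained sketch of the quoting step (hypothetical
names and a simplified regex compared to the code below):

    use strict;
    use warnings;

    my $sql = 'CREATE OPERATOR =# (LEFTARG = box, RIGHTARG = box, PROCEDURE = box_eq);';
    my @ops;
    # Double-quote the multi-character operator name so it is one token
    if ($sql =~ s/(CREATE\s+OPERATOR\s+)([\+\-\*\/<>=\~\!\@\#\%\^\&\|\`\?]+)\s*/$1"$2" /i) {
        push @ops, $2;
    }
    # $sql is now: CREATE OPERATOR "=#" (LEFTARG = box, ...);
    # After formatting, strip the quotes again (what _restore_operator() does)
    $sql =~ s/"\Q$_\E"/$_/g for @ops;
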
=cut
sub _quote_operator
{
my ($self, $str) = @_;
my @lines = split(/[\n]/, $$str);
for (my $i = 0; $i <= $#lines; $i++)
{
if ($lines[$i] =~ s/((?:CREATE|DROP|ALTER)\s+OPERATOR\s+(?:IF\s+EXISTS)?)\s*((:?[a-z0-9]+\.)?[\+\-\*\/<>=\~\!\@\#\%\^\&\|\`\?]+)\s*/$1 "$2" /i) {
push(@{ $self->{operator} }, $2) if (!grep(/^\Q$2\E$/, @{ $self->{operator} }));
}
}
$$str = join("\n", @lines);
my $idx = 0;
while ($$str =~ s/(NEGATOR|COMMUTATOR)\s*=\s*([^,\)\s]+)/\U$1\E$idx/is) {
$self->{uc($1)}{$idx} = "$1 = $2";
$idx++;
}
}
=head2 _restore_operator
Internal function used to restore the operators that were quoted
by the _quote_operator() method.
=cut
sub _restore_operator
{
my ($self, $str) = @_;
foreach my $op (@{ $self->{operator} })
{
$$str =~ s/"$op"/$op/gs;
}
if (exists $self->{COMMUTATOR}) {
$$str =~ s/COMMUTATOR(\d+)/$self->{COMMUTATOR}{$1}/igs;
}
if (exists $self->{NEGATOR}) {
$$str =~ s/NEGATOR(\d+)/$self->{NEGATOR}{$1}/igs;
}
}
=head2 _quote_comment_stmt
Internal function used to replace the constant in a COMMENT statement
so that it is tokenized as a single word.
The original values are restored with function _restore_comment_stmt().
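A short, self-contained sketch of the technique (illustrative only; the
variable names are hypothetical):

    use strict;
    use warnings;

    my $sql = 'COMMENT ON TABLE public.t IS $$Main fact table$$;';
    my %cmt;
    my $idx = 0;
    # Mask the dollar-quoted constant with a PGF_CMTSTR<n> placeholder
    while ($sql =~ s/(COMMENT\s+ON\s+(?:.*?)\s+IS)\s+(\$[^;]+?\$)\s*;/$1 PGF_CMTSTR$idx;/is) {
        $cmt{$idx++} = $2;
    }
    # $sql is now: COMMENT ON TABLE public.t IS PGF_CMTSTR0;
    # Restore it once tokenizing is done (what _restore_comment_stmt() does)
    $sql =~ s/PGF_CMTSTR(\d+)/$cmt{$1}/igs;
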
=cut
sub _quote_comment_stmt
{
my ($self, $str) = @_;
my $idx = 0;
while ($$str =~ s/(COMMENT\s+ON\s+(?:.*?)\s+IS)\s+(\$[^;]+?\$)\s*;/$1 PGF_CMTSTR$idx;/is) {
$self->{comment_str}{$idx} = $2;
$idx++;
}
}
=head2 _restore_comment_stmt
Internal function used to restore the comment strings that were removed
by the _quote_comment_stmt() method.
=cut
sub _restore_comment_stmt
{
my ($self, $str) = @_;
if (exists $self->{comment_str}) {
$$str =~ s/PGF_CMTSTR(\d+)/$self->{comment_str}{$1}/igs;
}
}
=head2 _remove_comments
Internal function used to remove comments from the SQL code
to simplify the work of wrap_lines(). Comments must be
restored with the _restore_comments() method.
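A simplified, self-contained sketch of the placeholder technique (the real
method below also collapses runs of consecutive "--" lines and normalizes
MySQL-style comments):

    use strict;
    use warnings;

    my $sql = "SELECT 1; /* keep me */ SELECT 2; -- trailing note\n";
    my %comments;
    my $idx = 0;
    # Mask C-style comments first
    while ($sql =~ s/(\/\*(?:.*?)\*\/)/PGF_COMMENT${idx}A/s) {
        $comments{"PGF_COMMENT${idx}A"} = $1;
        $idx++;
    }
    # Then mask the single-line comment at the end of the line
    if ($sql =~ s/(\-\-.*)$/PGF_COMMENT${idx}A/m) {
        $comments{"PGF_COMMENT${idx}A"} = $1;
    }
    # ... the content can now be wrapped safely, then comments are put back:
    $sql =~ s/(PGF_COMMENT\d+A)/$comments{$1}/g;
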
=cut
sub _remove_comments
{
my $self = shift;
my $idx = 0;
while ($self->{ 'content' } =~ s/(\/\*(.*?)\*\/)/PGF_COMMENT${idx}A/s) {
$self->{'comments'}{"PGF_COMMENT${idx}A"} = $1;
$idx++;
}
my @lines = split(/\n/, $self->{ 'content' });
for (my $j = 0; $j <= $#lines; $j++)
{
$lines[$j] //= '';
# Collapse a run of consecutive single-line (--) comments into a single placeholder
my $old_j = $j;
my $cmt = '';
while ($j <= $#lines && $lines[$j] =~ /^(\s*\-\-.*)$/)
{
$cmt .= "$1\n";
$j++;
}
if ( $j > $old_j )
{
chomp($cmt);
$lines[$old_j] =~ s/^(\s*\-\-.*)$/PGF_COMMENT${idx}A/;
$self->{'comments'}{"PGF_COMMENT${idx}A"} = $cmt;
$idx++;
$j--;
while ($j > $old_j)
{
delete $lines[$j];
$j--;
}
}
if ($lines[$j] =~ s/(\s*\-\-.*)$/PGF_COMMENT${idx}A/)
{
$self->{'comments'}{"PGF_COMMENT${idx}A"} = $1;
$idx++;
}
# MySQL supports different kinds of comment starters
if ( ($lines[$j] =~ s/(\s*COMMENT\s+'.*)$/PGF_COMMENT${idx}A/) ||
($lines[$j] =~ s/(\s*\# .*)$/PGF_COMMENT${idx}A/) )
{
$self->{'comments'}{"PGF_COMMENT${idx}A"} = $1;
# Normalize start of comment
$self->{'comments'}{"PGF_COMMENT${idx}A"} =~ s/^(\s*)COMMENT/$1\-\- /;
$self->{'comments'}{"PGF_COMMENT${idx}A"} =~ s/^(\s*)\#/$1\-\- /;
$idx++;
}
}
$self->{ 'content' } = join("\n", @lines);
# Remove extra newline after comment
while ($self->{ 'content' } =~ s/(PGF_COMMENT\d+A[\n])[\n]+/$1/s) {};
# Merge adjacent comment placeholders into a single one
while ($self->{ 'content' } =~ s/(PGF_COMMENT\d+A\s*PGF_COMMENT\d+A)/PGF_COMMENT${idx}A/s)
{
$self->{'comments'}{"PGF_COMMENT${idx}A"} = $1;
$idx++;
}
}
=head2 _restore_comments
Internal function used to restore comments in the SQL code
that were removed by the _remove_comments() method.
=cut
sub _restore_comments
{
my ($self, $wrap_comment) = @_;
if ($self->{'wrap_limit'} && $wrap_comment)
{
foreach my $k (keys %{$self->{'comments'}})
{
if ($self->{'comments'}{$k} =~ /^(\s*)--[\r\n]/s)
{
next;
}
elsif ($self->{'comments'}{$k} =~ /^(\s*)--/)
{
my $indent = $1 || '';
if (length($self->{'comments'}{$k}) > $self->{'wrap_limit'} + ($self->{'wrap_limit'}*10/100))
{
my @data = split(/\n/, $self->{'comments'}{$k});
map { s/^\s*--//; } @data;
$self->{'comments'}{$k} = join("\n", @data);
$Text::Wrap::columns = $self->{'wrap_limit'};
my $t = wrap('', ' ', $self->{'comments'}{$k});
@data = split(/\n/, $t);
map { s/^/$indent--/; } @data;
$self->{'comments'}{$k} = join("\n", @data);
} else {
$self->{'comments'}{$k} =~ s/^\s*--//s;
$self->{'comments'}{$k} = $indent . "--$self->{'comments'}{$k}";
}
# remove extra spaces after comment characters
$self->{'comments'}{$k} =~ s/--[ ]+/-- /gs;
}
}
}
while ($self->{ 'content' } =~ s/(PGF_COMMENT\d+A)/$self->{'comments'}{$1}/s) { delete $self->{'comments'}{$1}; };
}
=head2 wrap_lines
Internal function used to wrap lines at a given maximum length.
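A minimal sketch of the wrapping step, calling Text::Wrap directly (the method
itself also masks comments first and preserves each line's indentation):

    use strict;
    use warnings;
    use Text::Wrap;

    $Text::Wrap::columns = 80;   # plays the role of $self->{'wrap_limit'}
    my $long_line = 'SELECT ' . join(', ', map { "col$_" } 1 .. 40) . ' FROM t;';
    # Continuation lines are indented, as in the code below
    print wrap('', '    ', $long_line), "\n";
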
=cut
sub wrap_lines
{
my ($self, $wrap_comment) = @_;
return if (!$self->{'wrap_limit'} || !$self->{ 'content' });
$self->_remove_comments();
my @lines = split(/\n/, $self->{ 'content' });
$self->{ 'content' } = '';
foreach my $l (@lines)
{
# Remove and store the indentation of the line
my $indent = '';
if ($l =~ s/^(\s+)//) {
$indent = $1;
}
if (length($l) > $self->{'wrap_limit'} + ($self->{'wrap_limit'}*10/100))
{
$Text::Wrap::columns = $self->{'wrap_limit'};
my $t = wrap($indent, " "x$self->{ 'spaces' } . $indent, $l);
$self->{ 'content' } .= "$t\n";
} else {
$self->{ 'content' } .= $indent . "$l\n";
}
}
$self->_restore_comments($wrap_comment || $self->{ 'wrap_comment' }) if ($self->{ 'content' });
return;
}
sub _dump_var
{
my $self = shift;
foreach my $v (sort keys %{$self})
{
next if ($v !~ /^_/);
if ($self->{$v} =~ /ARRAY/) {
print STDERR "$v => (", join(',', @{$self->{$v}}), ")\n";
} else {
print STDERR "$v => $self->{$v}\n";
}
}
}
=head1 AUTHOR
pgFormatter is an original work from Gilles Darold
=head1 BUGS
Please report any bugs or feature requests to: https://github.com/darold/pgFormatter/issues
=head1 COPYRIGHT
Copyright 2012-2023 Gilles Darold. All rights reserved.
=head1 LICENSE
pgFormatter is free software distributed under the PostgreSQL Licence.
A modified version of the SQL::Beautify Perl Module is embedded in pgFormatter
with copyright (C) 2009 by Jonas Kramer and is published under the terms of
the Artistic License 2.0.
=cut
1;
}
__DATA__
WRFILE: jquery.jqplot.min.css
.jqplot-target{position:relative;color:#666;font-family:"Trebuchet MS",Arial,Helvetica,sans-serif;font-size:1em}.jqplot-axis{font-size:.75em}.jqplot-xaxis{margin-top:10px}.jqplot-x2axis{margin-bottom:10px}.jqplot-yaxis{margin-right:10px}.jqplot-y2axis,.jqplot-y3axis,.jqplot-y4axis,.jqplot-y5axis,.jqplot-y6axis,.jqplot-y7axis,.jqplot-y8axis,.jqplot-y9axis,.jqplot-yMidAxis{margin-left:10px;margin-right:10px}.jqplot-axis-tick,.jqplot-xaxis-tick,.jqplot-yaxis-tick,.jqplot-x2axis-tick,.jqplot-y2axis-tick,.jqplot-y3axis-tick,.jqplot-y4axis-tick,.jqplot-y5axis-tick,.jqplot-y6axis-tick,.jqplot-y7axis-tick,.jqplot-y8axis-tick,.jqplot-y9axis-tick,.jqplot-yMidAxis-tick{position:absolute;white-space:pre}.jqplot-xaxis-tick{top:0;left:15px;vertical-align:top}.jqplot-x2axis-tick{bottom:0;left:15px;vertical-align:bottom}.jqplot-yaxis-tick{right:0;top:15px;text-align:right}.jqplot-yaxis-tick.jqplot-breakTick{right:-20px;margin-right:0;padding:1px 5px 1px 5px;z-index:2;font-size:1.5em}.jqplot-y2axis-tick,.jqplot-y3axis-tick,.jqplot-y4axis-tick,.jqplot-y5axis-tick,.jqplot-y6axis-tick,.jqplot-y7axis-tick,.jqplot-y8axis-tick,.jqplot-y9axis-tick{left:0;top:15px;text-align:left}.jqplot-yMidAxis-tick{text-align:center;white-space:nowrap}.jqplot-xaxis-label{margin-top:10px;font-size:11pt;position:absolute}.jqplot-x2axis-label{margin-bottom:10px;font-size:11pt;position:absolute}.jqplot-yaxis-label{margin-right:10px;font-size:11pt;position:absolute}.jqplot-yMidAxis-label{font-size:11pt;position:absolute}.jqplot-y2axis-label,.jqplot-y3axis-label,.jqplot-y4axis-label,.jqplot-y5axis-label,.jqplot-y6axis-label,.jqplot-y7axis-label,.jqplot-y8axis-label,.jqplot-y9axis-label{font-size:11pt;margin-left:10px;position:absolute}.jqplot-meterGauge-tick{font-size:.75em;color:#999}.jqplot-meterGauge-label{font-size:1em;color:#999}table.jqplot-table-legend{margin-top:12px;margin-bottom:12px;margin-left:12px;margin-right:12px}table.jqplot-table-legend,table.jqplot-cursor-legend{background-color:rgba(255,255,255,0.6);border:1px solid #ccc;position:absolute;font-size:.75em}td.jqplot-table-legend{vertical-align:middle}td.jqplot-seriesToggle:hover,td.jqplot-seriesToggle:active{cursor:pointer}.jqplot-table-legend .jqplot-series-hidden{text-decoration:line-through}div.jqplot-table-legend-swatch-outline{border:1px solid #ccc;padding:1px}div.jqplot-table-legend-swatch{width:0;height:0;border-top-width:5px;border-bottom-width:5px;border-left-width:6px;border-right-width:6px;border-top-style:solid;border-bottom-style:solid;border-left-style:solid;border-right-style:solid}.jqplot-title{top:0;left:0;padding-bottom:.5em;font-size:1.2em}table.jqplot-cursor-tooltip{border:1px solid #ccc;font-size:.75em}.jqplot-cursor-tooltip{border:1px solid #ccc;font-size:.75em;white-space:nowrap;background:rgba(208,208,208,0.5);padding:1px}.jqplot-highlighter-tooltip,.jqplot-canvasOverlay-tooltip{border:1px solid #ccc;font-size:.75em;white-space:nowrap;background:rgba(208,208,208,0.5);padding:1px}.jqplot-point-label{font-size:.75em;z-index:2}td.jqplot-cursor-legend-swatch{vertical-align:middle;text-align:center}div.jqplot-cursor-legend-swatch{width:1.2em;height:.7em}.jqplot-error{text-align:center}.jqplot-error-message{position:relative;top:46%;display:inline-block}div.jqplot-bubble-label{font-size:.8em;padding-left:2px;padding-right:2px;color:rgb(20%,20%,20%)}div.jqplot-bubble-label.jqplot-bubble-label-highlight{background:rgba(90%,90%,90%,0.7)}div.jqplot-noData-container{text-align:center;background-color:rgba(96%,96%,96%,0.3)}
WRFILE: jquery.min.js
/*!
* jQuery JavaScript Library v1.9.1
* http://jquery.com/
*
* Includes Sizzle.js
* http://sizzlejs.com/
*
* Copyright 2005, 2012 jQuery Foundation, Inc. and other contributors
* Released under the MIT license
* http://jquery.org/license
*
* Date: 2013-2-4
*/
(function(a2,aG){var ai,w,aC=typeof aG,l=a2.document,aL=a2.location,bi=a2.jQuery,H=a2.$,aa={},a6=[],s="1.9.1",aI=a6.concat,ao=a6.push,a4=a6.slice,aM=a6.indexOf,z=aa.toString,V=aa.hasOwnProperty,aQ=s.trim,bJ=function(e,b3){return new bJ.fn.init(e,b3,w)},bA=/[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/.source,ac=/\S+/g,C=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,br=/^(?:(<[\w\W]+>)[^>]*|#([\w-]*))$/,a=/^<(\w+)\s*\/?>(?:<\/\1>|)$/,bh=/^[\],:{}\s]*$/,bk=/(?:^|:|,)(?:\s*\[)+/g,bG=/\\(?:["\\\/bfnrt]|u[\da-fA-F]{4})/g,aZ=/"[^"\\\r\n]*"|true|false|null|-?(?:\d+\.|)\d+(?:[eE][+-]?\d+|)/g,bS=/^-ms-/,aV=/-([\da-z])/gi,M=function(e,b3){return b3.toUpperCase()},bW=function(e){if(l.addEventListener||e.type==="load"||l.readyState==="complete"){bl();bJ.ready()}},bl=function(){if(l.addEventListener){l.removeEventListener("DOMContentLoaded",bW,false);a2.removeEventListener("load",bW,false)}else{l.detachEvent("onreadystatechange",bW);a2.detachEvent("onload",bW)}};bJ.fn=bJ.prototype={jquery:s,constructor:bJ,init:function(e,b5,b4){var b3,b6;if(!e){return this}if(typeof e==="string"){if(e.charAt(0)==="<"&&e.charAt(e.length-1)===">"&&e.length>=3){b3=[null,e,null]}else{b3=br.exec(e)}if(b3&&(b3[1]||!b5)){if(b3[1]){b5=b5 instanceof bJ?b5[0]:b5;bJ.merge(this,bJ.parseHTML(b3[1],b5&&b5.nodeType?b5.ownerDocument||b5:l,true));if(a.test(b3[1])&&bJ.isPlainObject(b5)){for(b3 in b5){if(bJ.isFunction(this[b3])){this[b3](b5[b3])}else{this.attr(b3,b5[b3])}}}return this}else{b6=l.getElementById(b3[2]);if(b6&&b6.parentNode){if(b6.id!==b3[2]){return b4.find(e)}this.length=1;this[0]=b6}this.context=l;this.selector=e;return this}}else{if(!b5||b5.jquery){return(b5||b4).find(e)}else{return this.constructor(b5).find(e)}}}else{if(e.nodeType){this.context=this[0]=e;this.length=1;return this}else{if(bJ.isFunction(e)){return b4.ready(e)}}}if(e.selector!==aG){this.selector=e.selector;this.context=e.context}return bJ.makeArray(e,this)},selector:"",length:0,size:function(){return this.length},toArray:function(){return a4.call(this)},get:function(e){return e==null?this.toArray():(e<0?this[this.length+e]:this[e])},pushStack:function(e){var b3=bJ.merge(this.constructor(),e);b3.prevObject=this;b3.context=this.context;return b3},each:function(b3,e){return bJ.each(this,b3,e)},ready:function(e){bJ.ready.promise().done(e);return this},slice:function(){return this.pushStack(a4.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(b4){var e=this.length,b3=+b4+(b4<0?e:0);return this.pushStack(b3>=0&&b30){return}ai.resolveWith(l,[bJ]);if(bJ.fn.trigger){bJ(l).trigger("ready").off("ready")}},isFunction:function(e){return bJ.type(e)==="function"},isArray:Array.isArray||function(e){return bJ.type(e)==="array"},isWindow:function(e){return e!=null&&e==e.window},isNumeric:function(e){return !isNaN(parseFloat(e))&&isFinite(e)},type:function(e){if(e==null){return String(e)}return typeof e==="object"||typeof e==="function"?aa[z.call(e)]||"object":typeof e},isPlainObject:function(b5){if(!b5||bJ.type(b5)!=="object"||b5.nodeType||bJ.isWindow(b5)){return false}try{if(b5.constructor&&!V.call(b5,"constructor")&&!V.call(b5.constructor.prototype,"isPrototypeOf")){return false}}catch(b4){return false}var b3;for(b3 in b5){}return b3===aG||V.call(b5,b3)},isEmptyObject:function(b3){var e;for(e in b3){return false}return true},error:function(e){throw new Error(e)},parseHTML:function(b6,b4,b5){if(!b6||typeof b6!=="string"){return null}if(typeof b4==="boolean"){b5=b4;b4=false}b4=b4||l;var 
b3=a.exec(b6),e=!b5&&[];if(b3){return[b4.createElement(b3[1])]}b3=bJ.buildFragment([b6],b4,e);if(e){bJ(e).remove()}return bJ.merge([],b3.childNodes)},parseJSON:function(e){if(a2.JSON&&a2.JSON.parse){return a2.JSON.parse(e)}if(e===null){return e}if(typeof e==="string"){e=bJ.trim(e);if(e){if(bh.test(e.replace(bG,"@").replace(aZ,"]").replace(bk,""))){return(new Function("return "+e))()}}}bJ.error("Invalid JSON: "+e)},parseXML:function(b5){var b3,b4;if(!b5||typeof b5!=="string"){return null}try{if(a2.DOMParser){b4=new DOMParser();b3=b4.parseFromString(b5,"text/xml")}else{b3=new ActiveXObject("Microsoft.XMLDOM");b3.async="false";b3.loadXML(b5)}}catch(b6){b3=aG}if(!b3||!b3.documentElement||b3.getElementsByTagName("parsererror").length){bJ.error("Invalid XML: "+b5)}return b3},noop:function(){},globalEval:function(e){if(e&&bJ.trim(e)){(a2.execScript||function(b3){a2["eval"].call(a2,b3)})(e)}},camelCase:function(e){return e.replace(bS,"ms-").replace(aV,M)},nodeName:function(b3,e){return b3.nodeName&&b3.nodeName.toLowerCase()===e.toLowerCase()},each:function(b7,b8,b3){var b6,b4=0,b5=b7.length,e=ab(b7);if(b3){if(e){for(;b40&&(b3-1) in b4)}w=bJ(l);var bY={};function ae(b3){var e=bY[b3]={};bJ.each(b3.match(ac)||[],function(b5,b4){e[b4]=true});return e}bJ.Callbacks=function(cc){cc=typeof cc==="string"?(bY[cc]||ae(cc)):bJ.extend({},cc);var b6,b5,e,b7,b8,b4,b9=[],ca=!cc.once&&[],b3=function(cd){b5=cc.memory&&cd;e=true;b8=b4||0;b4=0;b7=b9.length;b6=true;for(;b9&&b8-1){b9.splice(ce,1);if(b6){if(ce<=b7){b7--}if(ce<=b8){b8--}}}})}return this},has:function(cd){return cd?bJ.inArray(cd,b9)>-1:!!(b9&&b9.length)},empty:function(){b9=[];return this},disable:function(){b9=ca=b5=aG;return this},disabled:function(){return !b9},lock:function(){ca=aG;if(!b5){cb.disable()}return this},locked:function(){return !ca},fireWith:function(ce,cd){cd=cd||[];cd=[ce,cd.slice?cd.slice():cd];if(b9&&(!e||ca)){if(b6){ca.push(cd)}else{b3(cd)}}return this},fire:function(){cb.fireWith(this,arguments);return this},fired:function(){return !!e}};return cb};bJ.extend({Deferred:function(b4){var b3=[["resolve","done",bJ.Callbacks("once memory"),"resolved"],["reject","fail",bJ.Callbacks("once memory"),"rejected"],["notify","progress",bJ.Callbacks("memory")]],b5="pending",b6={state:function(){return b5},always:function(){e.done(arguments).fail(arguments);return this},then:function(){var b7=arguments;return bJ.Deferred(function(b8){bJ.each(b3,function(ca,b9){var cc=b9[0],cb=bJ.isFunction(b7[ca])&&b7[ca];e[b9[1]](function(){var cd=cb&&cb.apply(this,arguments);if(cd&&bJ.isFunction(cd.promise)){cd.promise().done(b8.resolve).fail(b8.reject).progress(b8.notify)}else{b8[cc+"With"](this===b6?b8.promise():this,cb?[cd]:arguments)}})});b7=null}).promise()},promise:function(b7){return b7!=null?bJ.extend(b7,b6):b6}},e={};b6.pipe=b6.then;bJ.each(b3,function(b8,b7){var ca=b7[2],b9=b7[3];b6[b7[1]]=ca.add;if(b9){ca.add(function(){b5=b9},b3[b8^1][2].disable,b3[2][2].lock)}e[b7[0]]=function(){e[b7[0]+"With"](this===e?b6:this,arguments);return this};e[b7[0]+"With"]=ca.fireWith});b6.promise(e);if(b4){b4.call(e,e)}return e},when:function(b6){var b4=0,b8=a4.call(arguments),e=b8.length,b3=e!==1||(b6&&bJ.isFunction(b6.promise))?e:0,cb=b3===1?b6:bJ.Deferred(),b5=function(cd,ce,cc){return function(cf){ce[cd]=this;cc[cd]=arguments.length>1?a4.call(arguments):cf;if(cc===ca){cb.notifyWith(ce,cc)}else{if(!(--b3)){cb.resolveWith(ce,cc)}}}},ca,b7,b9;if(e>1){ca=new Array(e);b7=new Array(e);b9=new Array(e);for(;b4
a";cd=b3.getElementsByTagName("*");cb=b3.getElementsByTagName("a")[0];if(!cd||!cb||!cd.length){return{}}cc=l.createElement("select");b5=cc.appendChild(l.createElement("option"));ca=b3.getElementsByTagName("input")[0];cb.style.cssText="top:1px;float:left;opacity:.5";ce={getSetAttribute:b3.className!=="t",leadingWhitespace:b3.firstChild.nodeType===3,tbody:!b3.getElementsByTagName("tbody").length,htmlSerialize:!!b3.getElementsByTagName("link").length,style:/top/.test(cb.getAttribute("style")),hrefNormalized:cb.getAttribute("href")==="/a",opacity:/^0.5/.test(cb.style.opacity),cssFloat:!!cb.style.cssFloat,checkOn:!!ca.value,optSelected:b5.selected,enctype:!!l.createElement("form").enctype,html5Clone:l.createElement("nav").cloneNode(true).outerHTML!=="<:nav>",boxModel:l.compatMode==="CSS1Compat",deleteExpando:true,noCloneEvent:true,inlineBlockNeedsLayout:false,shrinkWrapBlocks:false,reliableMarginRight:true,boxSizingReliable:true,pixelPosition:false};ca.checked=true;ce.noCloneChecked=ca.cloneNode(true).checked;cc.disabled=true;ce.optDisabled=!b5.disabled;try{delete b3.test}catch(b8){ce.deleteExpando=false}ca=l.createElement("input");ca.setAttribute("value","");ce.input=ca.getAttribute("value")==="";ca.value="t";ca.setAttribute("type","radio");ce.radioValue=ca.value==="t";ca.setAttribute("checked","t");ca.setAttribute("name","t");b9=l.createDocumentFragment();b9.appendChild(ca);ce.appendChecked=ca.checked;ce.checkClone=b9.cloneNode(true).cloneNode(true).lastChild.checked;if(b3.attachEvent){b3.attachEvent("onclick",function(){ce.noCloneEvent=false});b3.cloneNode(true).click()}for(b6 in {submit:true,change:true,focusin:true}){b3.setAttribute(b7="on"+b6,"t");ce[b6+"Bubbles"]=b7 in a2||b3.attributes[b7].expando===false}b3.style.backgroundClip="content-box";b3.cloneNode(true).style.backgroundClip="";ce.clearCloneStyle=b3.style.backgroundClip==="content-box";bJ(function(){var cf,ci,ch,cg="padding:0;margin:0;border:0;display:block;box-sizing:content-box;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;",e=l.getElementsByTagName("body")[0];if(!e){return}cf=l.createElement("div");cf.style.cssText="border:0;width:0;height:0;position:absolute;top:0;left:-9999px;margin-top:1px";e.appendChild(cf).appendChild(b3);b3.innerHTML="