pax_global_header00006660000000000000000000000064137321442170014516gustar00rootroot0000000000000052 comment=80a6b150febb5c0c91f2daa433cc089ff1278841 postgresql-autodoc-1.41+20200921/000077500000000000000000000000001373214421700161755ustar00rootroot00000000000000postgresql-autodoc-1.41+20200921/.gitignore000066400000000000000000000001471373214421700201670ustar00rootroot00000000000000.*.swp *~ ChangeLog.bak postgresql_autodoc config.status config.mk config.log env postgresql_autodoc.1 postgresql-autodoc-1.41+20200921/ChangeLog000066400000000000000000000225661373214421700177620ustar00rootroot000000000000002012-01-05 10:30 rbt * postgresql_autodoc.pl: Mark 1.41 release 2012-01-05 10:22 rbt * xml.tmpl: Update schema for Docbook 4.5. Use where possible. 2010-01-24 15:01 rbt * postgresql_autodoc.pl: Apply a fix to the function signatures that was reversing argument names. Patch created by Jyrki Muukkonen and submitted via Peter Eisentraut 2009-08-11 14:50 rbt * Makefile: Add the manpage to the release tarball. 2009-08-06 22:48 rbt * Makefile, postgresql_autodoc.1.in: Add a manpage for PostgreSQL Autodoc (Tim Retout) It always installs at the moment (BSD Port won't like that). 2009-06-13 07:06 rbt * postgresql_autodoc.pl: Correct the synopsis (Peter Eisentraut) 2009-04-30 22:31 rbt * ChangeLog: Finally, a 1.40 release changelog. 2009-04-30 22:31 rbt * ChangeLog, Makefile: Seems tar on FreeBSD has changed behaviour since the last release. Copy files to new location and build tarball out of that directory. 2009-04-30 22:14 rbt * ChangeLog, Makefile: Really fix the release process. 2009-04-30 22:13 rbt * postgresql_autodoc.pl: Tag for 1.40 2009-04-30 22:11 rbt * ChangeLog: Another crack at 1.40 2009-04-30 22:10 rbt * Makefile: make release was broken. 2009-04-30 22:09 rbt * ChangeLog: Release notes for 1.40 2009-04-26 13:01 rbt * postgresql_autodoc.pl: Don't need relhasindexes, relhasrules, or relhastriggers since older DBs have been dropped. This also makes up 8.4 compatible without additional changes required. 2009-04-24 00:43 rbt * postgresql_autodoc.pl: Improve wording for --table help (Jamie Nicholson) 2009-04-24 00:41 rbt * postgresql_autodoc.pl: Add -m to only output structures or functions matching a specific regex (Simon Michael) Tidy code along the way. 2009-04-24 00:21 rbt * postgresql_autodoc.pl: Remove support for PostgreSQL versions earlier than 8.0. Do a better job of named function parameters. 2009-04-24 00:04 rbt * postgresql_autodoc.pl: Minor fixes for named functions for newer DBD::Pg's. A long time ago DBD::Pg changed from returning an array string to returning an actual array for PostgreSQL arrays. Autodoc did not take this into account. 2009-04-23 23:46 rbt * Makefile, config.mk.in, configure.ac: Dramatically simplify installation process (Thanks Simon Bertrang). Complex installation checks have never been required so drop it and replace with simple make functionality. Targetting to a different installation prefix is done by setting the PREFIX variable: make PREFIX=/some/new/home install 2009-04-23 23:39 rbt * config.mk.in: Use datarootdir instead of datadir 2008-03-16 19:30 rbt * ChangeLog: 1.31 actual 2008-03-12 15:00 rbt * postgresql_autodoc.pl: Clean up formatting. 2008-03-12 15:00 rbt * postgresql_autodoc.pl: Do not output all functions in the database if a specific table list was requested. Good chance it isn't functions the user is after. 
2008-03-12 14:55 rbt * postgresql_autodoc.pl: Keep foreign key defaults (0) when referencing a table which does not exist in our output. 2008-03-12 14:50 rbt * postgresql_autodoc.pl: Leonardo Cezar patches: One of them to fix a odd behavior when we had views in schema to schema to extract some reporter. Previously we got an error on trying to get statistcs from a view, since views hasn't stats. The other one to choose which table(s) we want to collect information with --table=foo,bar,x ... option. Although it's functional we still get warning when dealing with table FKs that weren't selected together; among others gotchas ;-). Unfortunately I didn't find any mailing lists or forum to treat this issue neither at pgfoundry nor autodoc's website. -- FKeys still unfixed. Statistics portion cleaned up by me. Virtually no testing of input for comma separated values but oh well. Will follow with a fix to all the "gotchas" 2008-03-12 14:24 rbt * ChangeLog, postgresql_autodoc.pl: Tag for 1.31 release. After additional testing to ensure Foriegn Keys work again this is what will go out. 2008-03-12 14:21 rbt * postgresql_autodoc.pl: New versions of DBD::Pg like to return an array reference instead of the special text string that PostgreSQL returns. This might take care of the issue, although only partially tested (FreeBSD ports doesn't have the newer version yet as it breaks a number of applications). 2007-01-02 09:44 rbt * ChangeLog: Release 1.30 2007-01-02 08:53 rbt * configure.ac, postgresql_autodoc.pl: Toshio Kuratomi wrote a patch to allow a password to be provided via prompt. 2006-05-16 15:01 rbt * html.tmpl, postgresql_autodoc.pl, xml.tmpl: Assume the strings are in UTF8 instead of in ASCII. This expands on the number of characters that we can work with. While not perfect, it is an improvement. Submitted by: Dirk Jagdmann 2006-05-16 14:58 rbt * config.mk.in: The datadir should be /usr/local/share/postgresql_autodoc by default instead of /usr/local/share. Submitted by: Dirk Jagdmann 2006-05-16 14:57 rbt * Makefile: Add an uninstall target. Submitted by: Dirk Jagdmann 2006-03-17 17:32 rbt * postgresql_autodoc.pl: Do not prepare() the statistics query if it is not going to be used. Prepare() and finish() on an empty string causes DBD::Pg to complain about the finish(). 2006-02-12 20:15 rbt * html.tmpl, postgresql_autodoc.pl, regressdatabase.sql: Add in very simple inheritance documentation. Templates other than HTML need to be updated. Remove support for PostgreSQL 7.1 and 7.2. Continues to support 7.3 and later. 2006-02-12 15:27 rbt * html.tmpl: Display index information in HTML output. 2005-05-02 22:18 rbt * ChangeLog: Changelog for the 1.25 release. 2005-05-02 22:17 rbt * zigzag.dia.tmpl: Prevent Dia 0.94 from crashing by rearranging the Dependency attributes. 2005-05-02 22:07 rbt * dia.tmpl, zigzag.dia.tmpl: Eliminate Dia warnings for newer versions of dia (new elements). Luckily, Dia will completely ignore these saved values and calculate it's own instead which is convenient because we really have no idea where to place things. 2005-05-01 01:32 rbt * Makefile, neato.tmpl: Template for the Neato and Twopi tools (part of GraphViz) submitted by Tony Wasson. 2005-05-01 01:13 rbt * postgresql_autodoc.pl: Finish() the GetVersion sql statement. Set the FK Group number back to being a number instead of the schema name. Accidentally converted earlier while removing the $group variable. 
2005-05-01 01:07 rbt * dot.tmpl: Matt Bosworth has provided an updated dot template with PK, FK information and improved styling. Rod proceeded to break several things, but eventually got it working again. 2005-04-18 19:53 rbt * postgresql_autodoc.pl: Fix GraphViz foreign key links (they were missing). 2005-04-18 19:48 rbt * zigzag.dia.tmpl: The latest version of dia crashes when I open a dia file produced with the zigzag.dia.tmpl template. I replaced the following lines in zigzag.dia.tmpl with and dia no longer crashes. best whishes, Piebe de Vries 2005-02-27 14:39 rbt * postgresql_autodoc.pl: Bump copyright date. 2005-02-27 14:38 rbt * postgresql_autodoc.pl: Do a little bit of cleaning on this old and crufty code. 2005-02-27 13:08 rbt * postgresql_autodoc.pl: Include argument names as provided by Jacob Smullyan. Clean up handling of argument types from a schema not in the current path. This means getting rid of the schema prefix added by format_type so when can add our own when appropriate. 2004-10-13 09:46 rbt * ChangeLog, postgresql_autodoc.pl: Push out 1.24 2004-10-12 09:33 rbt * postgresql_autodoc.pl: Strip out quotes in the comparison area. 2004-06-29 12:40 rbt * dia.tmpl, postgresql_autodoc.pl, zigzag.dia.tmpl: From: Bort, Paul Subject: RE: Patch for Dia templates for Postgresql_AutoDoc Date: Tue, 22 Jun 2004 15:23:24 -0400 I've used the same method of shortening as the constraint you mentioned, but I made it a new entry (table_comment_dia) in the hash rather than mangle the existing one. I tried to follow the existing naming conventions and coding style. I picked a total of 45 characters because that seems to be about the right size on my screen. In testing I also found a bug in Dia that causes a crash if a comment is too long, but that seems to be in the 100+ character range, so I'm not worried about 45. Here are the patches for postgresql_autodoc.pl, dia.tmpl, and zigzag.dia.tmpl. 2004-06-02 21:46 rbt * ChangeLog, postgresql_autodoc.pl: 1.23 Release 2004-06-01 11:22 rbt * postgresql_autodoc.pl: Handle the inability to connect to the database (print real error message). Use pg_get_constraintdef() for 7.4 and newer databases. PostgreSQL does the formatting all by itself now. 2004-05-12 12:00 rbt * dia.tmpl, ChangeLog, .cvsignore, Makefile, configure.ac, html.tmpl, config.mk.in, dot.tmpl, install-sh, postgresql_autodoc.pl, regressdatabase.sql, xml.tmpl, zigzag.dia.tmpl: Initial revision 2004-05-12 12:00 rbt * dia.tmpl, ChangeLog, .cvsignore, Makefile, configure.ac, html.tmpl, config.mk.in, dot.tmpl, install-sh, postgresql_autodoc.pl, regressdatabase.sql, xml.tmpl, zigzag.dia.tmpl: Migrate code from sourceforge into pgfoundry postgresql-autodoc-1.41+20200921/Dockerfile000066400000000000000000000010201373214421700201600ustar00rootroot00000000000000# USAGE: # docker build . -t autodoc # docker run --mount src=`pwd`,target=/app,type=bind --network=host --env-file ./env -v $pwd:/app autodoc FROM ubuntu:18.04 WORKDIR /app RUN apt-get update RUN apt-get install -y libdbi-perl \ libhtml-template-perl libterm-readkey-perl \ libdbd-pg-perl make COPY . . 
RUN make install ENV DATABASE=app \ HOST=localhost \ PORT=port \ USER=app \ PASSWORD=password CMD postgresql_autodoc -d ${DATABASE} \ -h ${HOST} \ -p ${PORT} \ -u ${USER} \ --password=${PASSWORD} postgresql-autodoc-1.41+20200921/Makefile000066400000000000000000000037461373214421700176470ustar00rootroot00000000000000# $Header: /cvsroot/autodoc/autodoc/Makefile,v 1.9 2009/08/11 18:50:02 rbt Exp $ # install configuration DESTDIR = PREFIX = /usr/local BINDIR = ${PREFIX}/bin DATADIR = ${PREFIX}/share/postgresql_autodoc MANDIR = ${PREFIX}/share/man/man1 # build configuration TEMPLATES = dia.tmpl dot.tmpl dot_shortfk.tmpl html.tmpl neato.tmpl xml.tmpl zigzag.dia.tmpl BINARY = postgresql_autodoc SOURCE = ${BINARY}.pl MANPAGE = ${BINARY}.1 MANPAGE_SOURCE = ${MANPAGE}.in RELEASE_FILES = Makefile ChangeLog ${SOURCE} ${TEMPLATES} ${MANPAGE} RELEASE_DIR=postgresql_autodoc # system tools INSTALL_SCRIPT = $$(which install) -c PERL = $$(which perl) SED = $$(which sed) all: ${BINARY} ${MANPAGE} ${MANPAGE}: ${MANPAGE_SOURCE} ${SED} -e "s,@@TEMPLATE-DIR@@,${DATADIR}," \ ${MANPAGE_SOURCE} > ${MANPAGE} ${BINARY}: ${SOURCE} ${SED} -e "s,/usr/bin/env perl,${PERL}," \ -e "s,@@TEMPLATE-DIR@@,${DATADIR}," \ ${SOURCE} > ${BINARY} -chmod +x ${BINARY} install: all ${INSTALL_SCRIPT} -d ${DESTDIR}${BINDIR} ${INSTALL_SCRIPT} -d ${DESTDIR}${DATADIR} ${INSTALL_SCRIPT} -d ${DESTDIR}${MANDIR} ${INSTALL_SCRIPT} -m 755 ${BINARY} ${DESTDIR}${BINDIR} for entry in ${TEMPLATES} ; \ do ${INSTALL_SCRIPT} -m 644 $${entry} ${DESTDIR}${DATADIR} ; \ done ${INSTALL_SCRIPT} ${MANPAGE} ${DESTDIR}${MANDIR} uninstall: -rm ${DESTDIR}${BINDIR}/${BINARY} -for entry in ${TEMPLATES} ; \ do rm ${DESTDIR}${DATADIR}/$${entry} ; \ done -rm ${DESTDIR}${MANDIR}/${MANPAGE} -rmdir ${DESTDIR}${MANDIR} -rmdir ${DESTDIR}${DATADIR} -rmdir ${DESTDIR}${BINDIR} clean: rm -f ${BINARY} release: clean ${RELEASE_FILES} @if [ -z ${VERSION} ] ; then \ echo "-------------------------------------------"; \ echo "VERSION needs to be specified for a release"; \ echo "-------------------------------------------"; \ false; \ fi cvs2cl -cvs commit mkdir ${RELEASE_DIR} && cp ${RELEASE_FILES} ${RELEASE_DIR} && tar -czvf ${RELEASE_DIR}-${VERSION}.tar.gz ${RELEASE_DIR} rm -r ${RELEASE_DIR} .PHONY: install uninstall clean release postgresql-autodoc-1.41+20200921/README.org000066400000000000000000000112751373214421700176510ustar00rootroot00000000000000* PostgreSQL Autodoc This is a utility which will run through PostgreSQL system tables and returns HTML, DOT, and several styles of XML which describe the database. As a result, documentation about a project can be generated quickly and be automatically updatable, yet have a quite professional look if you do some DSSSL/CSS work. ** Requirements This program requires the following perl modules: - DBI - HTML::Template - Term::ReadKey - DBD::Pg ** Installation *** Ubuntu (Debian should be similar/identical) **** from the universe repository ~$ sudo apt install postgresql-autodoc~ **** from github ~$ sudo apt install libdbi-perl libhtml-template-perl libterm-readkey-perl libdbd-pg-perl~ ~$ git clone https://github.com/cbbrowne/autodoc.git~ ~$ cd autodoc~ ~$ sudo make install~ *** Docker Install [Docker](https://www.docker.com/) Create a Docker Image: ~$ docker build . 
-t autodoc~

Running Docker:

Create an env file by copying the example file and filling in details:

~$ cp env-example env~

Run a Docker container (with default command in Dockerfile):

~$ docker run --mount src=`pwd`,target=/app,type=bind --network=host --env-file ./env -v $pwd:/app autodoc~

Note: --network=host is for accessing a database on localhost.

Run a custom container (see Usage below; if you want the files to appear in the current directory, use the `--mount` arg above):

~$ docker run autodoc postgresql_autodoc [options]~

** Usage

~postgresql_autodoc [options]~

** Options

- ~−d~ :: Specify database name to connect to (default: current user)
- ~−f~ :: Specify output file prefix (default: current user)
- ~−h~ :: Specify database server host (default: localhost)
- ~−p~ :: Specify database server port (default: 5432)
- ~−u~ :: Specify database username (default: current user)
- ~−−password=~ :: Specify database password (default: blank)
- ~−w~ :: Use ~/.pgpass for authentication (overrides all other password options)
- ~−l~ :: Path to the templates (default: @@TEMPLATE-DIR@@)
- ~−t~ :: Type of output wanted (default: All in template library)
  + _html_ :: The HTML is human readable (via web browser), representing the entire schema within a single HTML document, and includes referenceable labels for each object.
  + _dia_ :: This remaps the schema into XML using the XML schema of [[https://git.gnome.org/browse/dia/][Dia]], an interactive diagramming tool. It does not do any automated layout, so making the diagram usable requires manual work; this is often not terribly useful.
  + _xml_ :: The second type of XML is similar to HTML, but is in DocBook 4 format. It enables you to mix schema documentation with other DocBook documentation via the XREFs, generating PDFs, HTML, RTF, or other formatted documents. Object references can be made between these tools and JavaDoc with use of appropriate XREFs (see ~xreflabel~ elements in the XML).
  + _neato_ :: This generates the schema in the form accepted by GraphViz ~neato~, which draws the schema as an undirected graph.
  + _zigzag.dia_ :: This generates a diagram for Dia in another form.
- ~−s~ :: Specify a specific schema to match. Technically this is a regular expression, but anything other than a specific name may have unusual results.
- ~−m~ :: Show only tables/objects with names matching the specified regular expression.
- ~−−table=~ :: Tables to export. Multiple tables may be provided using a comma-separated list, e.g. table,table2,table3.
- ~−−statistics~ :: In 7.4 and later, with the contrib module pgstattuple installed, statistics can be gathered on the tables in the database (average size, free space, disk space used, dead tuple counts, etc.). This is disk intensive on large databases as all pages must be visited.
- ~--comment-filter=~ :: Processes comments with an external program. This is useful when comments are written in a markup language (wiki/markdown); it allows the structured text to be converted to the desired output format. For each comment the filter command is executed with two arguments: the desired output format (as specified by ~-t~) and the path to a file containing the original comment text. The filter should read this file, process it, and write the result to standard output.

** AUTHOR

Originally, this code base was derived from the [[http://pgfoundry.org/scm/?group_id=1000009][CVS repository on pgFoundry]], shifted to GitHub due to the popularity of maintenance via Git. The presence of further contributions to the codebase seems to confirm that this was a decent idea.
Rod Taylor postgresql-autodoc-1.41+20200921/dia.tmpl000066400000000000000000000205421373214421700176330ustar00rootroot00000000000000 "> ## ## ## #PK # ## ## ## ## ## "> ## " connection=""/> " connection=""/> postgresql-autodoc-1.41+20200921/dot.tmpl000066400000000000000000000033541373214421700176660ustar00rootroot00000000000000digraph g { graph [ rankdir = "LR", concentrate = true, ratio = auto ]; node [ fontsize = "10", shape = record ]; edge [ ]; "." [shape = plaintext, label = <
\N
" > PK FK ">
> ];
".":rtcol -> ".":ltcol [label=""]; } postgresql-autodoc-1.41+20200921/dot_shortfk.tmpl000066400000000000000000000033411373214421700214220ustar00rootroot00000000000000digraph g { graph [ rankdir = "LR", concentrate = true, ratio = auto ]; node [ fontsize = "10", shape = record ]; edge [ ]; "." [fontname = helvetica, shape = plaintext, label = <
\N
" > PK FK ">
> ];
".":rtcol -> ".":ltcol [label=" "]; } postgresql-autodoc-1.41+20200921/env-example000066400000000000000000000001641373214421700203420ustar00rootroot00000000000000# Copy, rename to env, and fill in your db details DATABASE=app HOST=localhost PORT=5432 USER=app PASSWORD=password postgresql-autodoc-1.41+20200921/html.tmpl000066400000000000000000000302451373214421700200430ustar00rootroot00000000000000 Index for <!-- TMPL_VAR ESCAPE="HTML" name="database" -->



Dumped on

Index of database -

-->

Schema


View:Table: ">.">

tr0tr1">
. Structure
F-Key Name Type Description
">..# PRIMARY KEY UNIQUE# NOT NULL DEFAULT

Table . Inherits ">.,

 

Statistics
Total Space (disk usage) Tuple Count Active Space Dead Space Free Space

 

tr0tr1">
. Constraints
Name Constraint

Tables referencing this one via Foreign Key Constraints:

 

tr0tr1">
Permissions which apply to .
User
Select
Insert
Update
Delete
Reference
Rule
Trigger

Index - ">Schema


Function: ">.">

Returns:

Language:

Generated by PostgreSQL Autodoc

W3C HTML 4.01 Strict

postgresql-autodoc-1.41+20200921/install-sh000077500000000000000000000126711373214421700202100ustar00rootroot00000000000000#!/bin/sh # # install - install a program, script, or datafile # This comes from X11R5 (mit/util/scripts/install.sh). # # Copyright 1991 by the Massachusetts Institute of Technology # # Permission to use, copy, modify, distribute, and sell this software and its # documentation for any purpose is hereby granted without fee, provided that # the above copyright notice appear in all copies and that both that # copyright notice and this permission notice appear in supporting # documentation, and that the name of M.I.T. not be used in advertising or # publicity pertaining to distribution of the software without specific, # written prior permission. M.I.T. makes no representations about the # suitability of this software for any purpose. It is provided "as is" # without express or implied warranty. # # Calling this script install-sh is preferred over install.sh, to prevent # `make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. It can only install one file at a time, a restriction # shared with many OS's install programs. # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit="${DOITPROG-}" # put in absolute paths if you don't have them in your path; or use env. vars. mvprog="${MVPROG-mv}" cpprog="${CPPROG-cp}" chmodprog="${CHMODPROG-chmod}" chownprog="${CHOWNPROG-chown}" chgrpprog="${CHGRPPROG-chgrp}" stripprog="${STRIPPROG-strip}" rmprog="${RMPROG-rm}" mkdirprog="${MKDIRPROG-mkdir}" transformbasename="" transform_arg="" instcmd="$mvprog" chmodcmd="$chmodprog 0755" chowncmd="" chgrpcmd="" stripcmd="" rmcmd="$rmprog -f" mvcmd="$mvprog" src="" dst="" dir_arg="" while [ x"$1" != x ]; do case $1 in -c) instcmd="$cpprog" shift continue;; -d) dir_arg=true shift continue;; -m) chmodcmd="$chmodprog $2" shift shift continue;; -o) chowncmd="$chownprog $2" shift shift continue;; -g) chgrpcmd="$chgrpprog $2" shift shift continue;; -s) stripcmd="$stripprog" shift continue;; -t=*) transformarg=`echo $1 | sed 's/-t=//'` shift continue;; -b=*) transformbasename=`echo $1 | sed 's/-b=//'` shift continue;; *) if [ x"$src" = x ] then src=$1 else # this colon is to work around a 386BSD /bin/sh bug : dst=$1 fi shift continue;; esac done if [ x"$src" = x ] then echo "install: no input file specified" exit 1 else : fi if [ x"$dir_arg" != x ]; then dst=$src src="" if [ -d $dst ]; then instcmd=: chmodcmd="" else instcmd=$mkdirprog fi else # Waiting for this to be detected by the "$instcmd $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if [ -f $src -o -d $src ] then : else echo "install: $src does not exist" exit 1 fi if [ x"$dst" = x ] then echo "install: no destination specified" exit 1 else : fi # If destination is a directory, append the input filename; if your system # does not like double slashes in filenames, you may need to add some logic if [ -d $dst ] then dst="$dst"/`basename $src` else : fi fi ## this sed command emulates the dirname command dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` # Make sure that the destination directory exists. # this part is taken from Noah Friedman's mkinstalldirs script # Skip lots of stat calls in the usual case. if [ ! 
-d "$dstdir" ]; then defaultIFS=' ' IFS="${IFS-${defaultIFS}}" oIFS="${IFS}" # Some sh's can't handle IFS=/ for some reason. IFS='%' set - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'` IFS="${oIFS}" pathcomp='' while [ $# -ne 0 ] ; do pathcomp="${pathcomp}${1}" shift if [ ! -d "${pathcomp}" ] ; then $mkdirprog "${pathcomp}" else : fi pathcomp="${pathcomp}/" done fi if [ x"$dir_arg" != x ] then $doit $instcmd $dst && if [ x"$chowncmd" != x ]; then $doit $chowncmd $dst; else : ; fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dst; else : ; fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dst; else : ; fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dst; else : ; fi else # If we're going to rename the final executable, determine the name now. if [ x"$transformarg" = x ] then dstfile=`basename $dst` else dstfile=`basename $dst $transformbasename | sed $transformarg`$transformbasename fi # don't allow the sed command to completely eliminate the filename if [ x"$dstfile" = x ] then dstfile=`basename $dst` else : fi # Make a temp file name in the proper directory. dsttmp=$dstdir/#inst.$$# # Move or copy the file name to the temp name $doit $instcmd $src $dsttmp && trap "rm -f ${dsttmp}" 0 && # and set any options; do chmod last to preserve setuid bits # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $instcmd $src $dsttmp" command. if [ x"$chowncmd" != x ]; then $doit $chowncmd $dsttmp; else :;fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dsttmp; else :;fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dsttmp; else :;fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dsttmp; else :;fi && # Now rename the file to the real destination. $doit $rmcmd -f $dstdir/$dstfile && $doit $mvcmd $dsttmp $dstdir/$dstfile fi && exit 0 postgresql-autodoc-1.41+20200921/neato.tmpl000066400000000000000000000016141373214421700202030ustar00rootroot00000000000000digraph g { node [ fontsize = "10", shape = record ]; edge []; "." [shape = record, label = "{ \N| : \l}" ]; "." -> "." [label=""]; } postgresql-autodoc-1.41+20200921/postgresql_autodoc.1.in000066400000000000000000000044311373214421700226070ustar00rootroot00000000000000.TH POSTGRESQL_AUTODOC 1 "19 July 2009" .SH NAME postgresql_autodoc \- creates postgresql tables overview .SH SYNOPSIS .B postgresql_autodoc [options] .SH DESCRIPTION This is a utility which will run through PostgreSQL system tables and returns HTML, DOT, and 2 styles of XML which describes the database. .PP The HTML is human readable (via webbrowser). The first style of XML is actually the fileformat of Dia, a UML diagram tool. The second type of XML is similar to the HTML but in the Docbook 4 format. It enables yous to mix in other docbook documentation via the XREFs, generating PDFs, HTML, RTF, or other formatted documents. Between these tools and JavaDoc with the appropriate XREFs, documentation about a project can be generated quickly and be easily updatable yet have a very professional look with some DSSSL work. .SH OPTIONS .TP .B \-d Specify database name to connect to (default: current user) .TP .B \-f Specify output file prefix (default: current user) .TP .B \-h Specify database server host (default: localhost) .TP .B \-p Specify database server port (default: 5432) .TP .B \-u Specify database username (default: current user) .TP .B \-\-password= Specify database password (default: blank) If no password is specified, one is prompted for. 
.TP .B \-w Use ~/.pgpass for authentication (overrides all other password options) .TP .B \-l Path to the templates (default: @@TEMPLATE-DIR@@) .TP .B \-t Type of output wanted (default: All in template library) .TP .B \-s Specify a specific schema to match. Technically this is a regular expression but anything other than a specific name may have unusual results. .TP .B \-m Show only tables/objects with names matching the specified regular expression. .TP .B \-\-table= Tables to export. Multiple tables may be provided using a comma-separated list, i.e. table,table2,table3. .TP .B \-\-statistics In 7.4 and later, with the contrib module pgstattuple installed we can gather statistics on the tables in the database (average size, free space, disk space used, dead tuple counts, etc.) This is disk intensive on large databases as all pages must be visited. .SH AUTHOR Rod Taylor postgresql-autodoc-1.41+20200921/postgresql_autodoc.pl000077500000000000000000002152151373214421700224640ustar00rootroot00000000000000#!/usr/bin/env perl # -- # -*- Perl -*-w # $Header: /cvsroot/autodoc/autodoc/postgresql_autodoc.pl,v 1.30 2012/01/05 15:30:33 rbt Exp $ # Imported 1.22 2002/02/08 17:09:48 into sourceforge # Postgres Auto-Doc Version 1.41 # License # ------- # Copyright (c) 2001-2009, Rod Taylor # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following # disclaimer in the documentation and/or other materials provided # with the distribution. # # 3. Neither the name of the InQuent Technologies Inc. nor the names # of its contributors may be used to endorse or promote products # derived from this software without specific prior written # permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FREEBSD # PROJECT OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# About Project # ------------- # Various details about the project and related items can be found at # the website # # http://www.rbt.ca/autodoc/ use strict; use warnings; use DBI; use Fcntl; # Allows file templates use HTML::Template; # Allow reading a password from stdin use Term::ReadKey; # Used for storing comments when external processor is used use File::Temp qw{ tempfile }; sub main($) { my ($ARGV) = @_; my %db; # The templates path # @@TEMPLATE-DIR@@ will be replaced by make in the build phase my $template_path = '@@TEMPLATE-DIR@@'; # Setup the default connection variables based on the environment my $dbuser = $ENV{'PGUSER'}; $dbuser ||= $ENV{'USER'}; my $database = $ENV{'PGDATABASE'}; $database ||= $dbuser; my $dbhost = $ENV{'PGHOST'}; $dbhost ||= ""; my $dbport = $ENV{'PGPORT'}; $dbport ||= ""; # Determine whether we need a password to connect my $needpass = 0; my $dbpass = ""; my $output_filename_base = $database; # Tracking variables my $dbisset = 0; my $fileisset = 0; my $only_schema; my $only_matching; my $table_out; my $wanted_output = undef; # means all types my $statistics = 0; my $filter_prog; # Fetch base and dirnames. Useful for Usage() my $basename = $0; my $dirname = $0; $basename =~ s|^.*/([^/]+)$|$1|; $dirname =~ s|^(.*)/[^/]+$|$1|; # If template_path isn't defined, lets set it ourselves $template_path = $dirname if ( !defined($template_path) ); for ( my $i = 0 ; $i <= $#ARGV ; $i++ ) { ARGPARSE: for ( $ARGV[$i] ) { # Set the database /^-d$/ && do { $database = $ARGV[ ++$i ]; $dbisset = 1; if ( !$fileisset ) { $output_filename_base = $database; } last; }; # Set the user /^-[uU]$/ && do { $dbuser = $ARGV[ ++$i ]; if ( !$dbisset ) { $database = $dbuser; if ( !$fileisset ) { $output_filename_base = $database; } } last; }; # Set the hostname /^-h$/ && do { $dbhost = $ARGV[ ++$i ]; last; }; # Set the Port /^-p$/ && do { $dbport = $ARGV[ ++$i ]; last; }; # Set the users password /^--password=/ && do { $dbpass = $ARGV[$i]; $dbpass =~ s/^--password=//g; last; }; # Make sure we get a password before attempting to conenct /^--password$/ && do { $needpass = 1; last; }; # Read from .pgpass (override all other password options) /^-w$/ && do { $dbpass = undef; $dbuser = undef; $needpass = 0; last; }; # Set the base of the filename. The extensions pulled # from the templates will be appended to this name /^-f$/ && do { $output_filename_base = $ARGV[ ++$i ]; $fileisset = 1; last; }; # Set the template directory explicitly /^(-l|--library)$/ && do { $template_path = $ARGV[ ++$i ]; last; }; # Set the output type /^(-t|--type)$/ && do { $wanted_output = $ARGV[ ++$i ]; last; }; # User has requested a single schema dump and provided a pattern /^(-s|--schema)$/ && do { $only_schema = $ARGV[ ++$i ]; last; }; # User has requested only tables/objects matching a pattern /^(-m|--matching)$/ && do { $only_matching = $ARGV[ ++$i ]; last; }; # One might dump a table's set (comma-separated) or just one # If dumping a set of specific tables do NOT dump out the functions # in this database. Generates noise in the output # that most likely isn't wanted. Check for $table_out around the # function gathering location. 
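        # The comma-separated list is split apart, each name is run
        # through single_quote, and the pieces are rejoined so the result
        # can be interpolated directly into the IN (...) clause that is
        # appended to $sql_Tables below.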
/^--table=/ && do { my $some_table = $ARGV[$i]; $some_table =~ s/^--table=//g; my @tables_in = split( ',', $some_table ); sub single_quote; $table_out = join( ',', map( single_quote, @tables_in ) ); last; }; # Check to see if Statistics have been requested /^--statistics$/ && do { $statistics = 1; last; }; /^--comment-filter=/ && do { my $some_filter_prog = $ARGV[$i]; $some_filter_prog =~ s/^--comment-filter=//g; $filter_prog = $some_filter_prog; last; }; # Help is wanted, redirect user to usage() /^-\?$/ && do { usage( $basename, $database, $dbuser ); last; }; /^--help$/ && do { usage( $basename, $database, $dbuser ); last; }; } } # If no arguments have been provided, connect to the database anyway but # inform the user of what we're doing. if ( $#ARGV <= 0 ) { print <connect( @{$dbConnect} ) or triggerError("Unable to connect due to: $DBI::errstr"); $dbh->do("set client_encoding to 'UTF-8'") or triggerError("could not set client_encoding to UTF-8: $DBI::errstr"); my %struct; $db->{$database}{'STRUCT'} = \%struct; my $struct = $db->{$database}{'STRUCT'}; # PostgreSQL's version is used to determine what queries are required # to retrieve a given information set. if ( $dbh->{pg_server_version} < 70300 ) { die("PostgreSQL 7.3 and later are supported"); } # Ensure we only retrieve information for the requested schemas. # # system_schema -> The primary system schema for a database. # Public is used for verions prior to 7.3 # # system_schema_list -> The list of schemas which we are not supposed # to gather information for. # TODO: Merge with system_schema in array form. # # schemapattern -> The schema the user provided as a command # line option. my $schemapattern = '^'; my $system_schema = 'pg_catalog'; my $system_schema_list = 'pg_catalog|pg_toast|pg_temp_[0-9]+|information_schema'; if ( defined($only_schema) ) { $schemapattern = '^' . $only_schema . '$'; } # and only objects matching the specified pattern, if any my $matchpattern = ''; if ( defined($only_matching) ) { $matchpattern = $only_matching; } # # List of queries which are used to gather information from the # database. The queries differ based on version but should # provide similar output. At some point it should be safe to remove # support for older database versions. # # Fetch the description of the database my $sql_Database = q{ SELECT pg_catalog.obj_description(oid, 'pg_database') as comment FROM pg_catalog.pg_database WHERE datname = '$database'; }; # Pull out a list of tables, views and special structures. 
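    # relkind is mapped to a friendlier label here: 'r' becomes 'table',
    # 's' becomes 'special', and everything else (views) is reported as
    # 'view', with pg_get_viewdef() fetched for views.  The result set is
    # limited by the schema and name patterns supplied on the command line
    # and, when --table was given, by the IN (...) list appended below.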
my $sql_Tables = qq{ SELECT nspname as namespace , relname as tablename , pg_catalog.pg_get_userbyid(relowner) AS tableowner , pg_class.oid , pg_catalog.obj_description(pg_class.oid, 'pg_class') as table_description , relacl , CASE WHEN relkind = 'r' THEN 'table' WHEN relkind = 's' THEN 'special' ELSE 'view' END as reltype , CASE WHEN relkind = 'v' THEN pg_get_viewdef(pg_class.oid) ELSE NULL END as view_definition FROM pg_catalog.pg_class JOIN pg_catalog.pg_namespace ON (relnamespace = pg_namespace.oid) WHERE relkind IN ('r', 's', 'v') AND relname ~ '$matchpattern' AND nspname !~ '$system_schema_list' AND nspname ~ '$schemapattern' }; $sql_Tables .= qq{ AND relname IN ($table_out)} if defined($table_out); # - uses pg_class.oid my $sql_Columns = q{ SELECT attname as column_name , attlen as column_length , CASE WHEN pg_type.typname = 'int4' AND EXISTS (SELECT TRUE FROM pg_catalog.pg_depend JOIN pg_catalog.pg_class ON (pg_class.oid = objid) WHERE refobjsubid = attnum AND refobjid = attrelid AND relkind = 'S') THEN 'serial' WHEN pg_type.typname = 'int8' AND EXISTS (SELECT TRUE FROM pg_catalog.pg_depend JOIN pg_catalog.pg_class ON (pg_class.oid = objid) WHERE refobjsubid = attnum AND refobjid = attrelid AND relkind = 'S') THEN 'bigserial' ELSE pg_catalog.format_type(atttypid, atttypmod) END as column_type , CASE WHEN attnotnull THEN cast('NOT NULL' as text) ELSE cast('' as text) END as column_null , CASE WHEN pg_type.typname IN ('int4', 'int8') AND EXISTS (SELECT TRUE FROM pg_catalog.pg_depend JOIN pg_catalog.pg_class ON (pg_class.oid = objid) WHERE refobjsubid = attnum AND refobjid = attrelid AND relkind = 'S') THEN NULL ELSE pg_get_expr(adbin, adrelid) END as column_default , pg_catalog.col_description(attrelid, attnum) as column_description , attnum FROM pg_catalog.pg_attribute JOIN pg_catalog.pg_type ON (pg_type.oid = atttypid) LEFT JOIN pg_catalog.pg_attrdef ON ( attrelid = adrelid AND attnum = adnum) WHERE attnum > 0 AND attisdropped IS FALSE AND attrelid = ?; }; my $sql_Table_Statistics; if ( $statistics == 1 ) { if ( $dbh->{pg_server_version} <= 70300 ) { triggerError( "Table statistics supported on PostgreSQL 7.4 and later.\n" . "Remove --statistics flag and try again." ); } $sql_Table_Statistics = q{ SELECT table_len , tuple_count , tuple_len , CAST(tuple_percent AS numeric(20,2)) AS tuple_percent , dead_tuple_count , dead_tuple_len , CAST(dead_tuple_percent AS numeric(20,2)) AS dead_tuple_percent , CAST(free_space AS numeric(20,2)) AS free_space , CAST(free_percent AS numeric(20,2)) AS free_percent FROM pgstattuple(CAST(? AS oid)); }; } my $sql_Indexes = q{ SELECT schemaname , tablename , indexname , substring( indexdef FROM position('(' IN indexdef) + 1 FOR length(indexdef) - position('(' IN indexdef) - 1 ) AS indexdef FROM pg_catalog.pg_indexes WHERE substring(indexdef FROM 8 FOR 6) != 'UNIQUE' AND schemaname = ? AND tablename = ?; }; my $sql_Inheritance = qq{ SELECT parnsp.nspname AS par_schemaname , parcla.relname AS par_tablename , chlnsp.nspname AS chl_schemaname , chlcla.relname AS chl_tablename FROM pg_catalog.pg_inherits JOIN pg_catalog.pg_class AS chlcla ON (chlcla.oid = inhrelid) JOIN pg_catalog.pg_namespace AS chlnsp ON (chlnsp.oid = chlcla.relnamespace) JOIN pg_catalog.pg_class AS parcla ON (parcla.oid = inhparent) JOIN pg_catalog.pg_namespace AS parnsp ON (parnsp.oid = parcla.relnamespace) WHERE chlnsp.nspname = ? AND chlcla.relname = ? 
AND chlnsp.nspname ~ '$schemapattern' AND parnsp.nspname ~ '$schemapattern'; }; # Fetch the list of PRIMARY and UNIQUE keys my $sql_Primary_Keys = q{ SELECT conname AS constraint_name , pg_catalog.pg_get_indexdef(d.objid) AS constraint_definition , CASE WHEN contype = 'p' THEN 'PRIMARY KEY' ELSE 'UNIQUE' END as constraint_type FROM pg_catalog.pg_constraint AS c JOIN pg_catalog.pg_depend AS d ON (d.refobjid = c.oid) WHERE contype IN ('p', 'u') AND deptype = 'i' AND conrelid = ?; }; # FOREIGN KEY fetch # # Don't return the constraint name if it was automatically generated by # PostgreSQL. The $N (where N is an integer) is not a descriptive enough # piece of information to be worth while including in the various outputs. my $sql_Foreign_Keys = qq{ SELECT pg_constraint.oid , pg_namespace.nspname AS namespace , CASE WHEN substring(pg_constraint.conname FROM 1 FOR 1) = '\$' THEN '' ELSE pg_constraint.conname END AS constraint_name , conkey AS constraint_key , confkey AS constraint_fkey , confrelid AS foreignrelid FROM pg_catalog.pg_constraint JOIN pg_catalog.pg_class ON (pg_class.oid = conrelid) JOIN pg_catalog.pg_class AS pc ON (pc.oid = confrelid) JOIN pg_catalog.pg_namespace ON (pg_class.relnamespace = pg_namespace.oid) JOIN pg_catalog.pg_namespace AS pn ON (pn.oid = pc.relnamespace) WHERE contype = 'f' AND conrelid = ? AND pg_namespace.nspname ~ '$schemapattern' AND pn.nspname ~ '$schemapattern'; }; my $sql_Foreign_Key_Arg = q{ SELECT attname AS attribute_name , relname AS relation_name , nspname AS namespace FROM pg_catalog.pg_attribute JOIN pg_catalog.pg_class ON (pg_class.oid = attrelid) JOIN pg_catalog.pg_namespace ON (relnamespace = pg_namespace.oid) WHERE attrelid = ? AND attnum = ?; }; # Fetch CHECK constraints my $sql_Constraint; $sql_Constraint = q{ SELECT pg_get_constraintdef(oid) AS constraint_source , conname AS constraint_name FROM pg_constraint WHERE conrelid = ? AND contype = 'c'; }; # Query for function information my $sql_Function; my $sql_FunctionArg; $sql_Function = qq{ SELECT proname AS function_name , nspname AS namespace , lanname AS language_name , pg_catalog.obj_description(pg_proc.oid, 'pg_proc') AS comment , proargtypes AS function_args , proargnames AS function_arg_names , prosrc AS source_code , proretset AS returns_set , prorettype AS return_type FROM pg_catalog.pg_proc JOIN pg_catalog.pg_language ON (pg_language.oid = prolang) JOIN pg_catalog.pg_namespace ON (pronamespace = pg_namespace.oid) JOIN pg_catalog.pg_type ON (prorettype = pg_type.oid) WHERE pg_namespace.nspname !~ '$system_schema_list' AND pg_namespace.nspname ~ '$schemapattern' AND proname ~ '$matchpattern' AND proname != 'plpgsql_call_handler'; }; $sql_FunctionArg = q{ SELECT nspname AS namespace , replace( pg_catalog.format_type(pg_type.oid, typtypmod) , nspname ||'.' , '') AS type_name FROM pg_catalog.pg_type JOIN pg_catalog.pg_namespace ON (pg_namespace.oid = typnamespace) WHERE pg_type.oid = ?; }; # Fetch schema information. 
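    # The schema list is restricted by the same $schemapattern and
    # $system_schema_list filters used above; each schema's comment comes
    # from obj_description() on pg_namespace.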
my $sql_Schema = qq{ SELECT pg_catalog.obj_description(oid, 'pg_namespace') AS comment , nspname as namespace FROM pg_catalog.pg_namespace WHERE pg_namespace.nspname !~ '$system_schema_list' AND pg_namespace.nspname ~ '$schemapattern'; }; my $sth_Columns = $dbh->prepare($sql_Columns); my $sth_Constraint = $dbh->prepare($sql_Constraint); my $sth_Database = $dbh->prepare($sql_Database); my $sth_Foreign_Keys = $dbh->prepare($sql_Foreign_Keys); my $sth_Foreign_Key_Arg = $dbh->prepare($sql_Foreign_Key_Arg); my $sth_Function = $dbh->prepare($sql_Function); my $sth_FunctionArg = $dbh->prepare($sql_FunctionArg); my $sth_Indexes = $dbh->prepare($sql_Indexes); my $sth_Inheritance = $dbh->prepare($sql_Inheritance); my $sth_Primary_Keys = $dbh->prepare($sql_Primary_Keys); my $sth_Schema = $dbh->prepare($sql_Schema); my $sth_Tables = $dbh->prepare($sql_Tables); my $sth_Table_Statistics = $dbh->prepare($sql_Table_Statistics) if ( $statistics == 1 ); # Fetch Database info $sth_Database->execute(); my $dbinfo = $sth_Database->fetchrow_hashref; if ( defined($dbinfo) ) { $db->{$database}{'COMMENT'} = $dbinfo->{'comment'}; } # Fetch tables and all things bound to tables $sth_Tables->execute(); while ( my $tables = $sth_Tables->fetchrow_hashref ) { my $reloid = $tables->{'oid'}; my $relname = $tables->{'tablename'}; my $schema = $tables->{'namespace'}; EXPRESSIONFOUND: # Store permissions my $acl = $tables->{'relacl'}; # Empty acl groups cause serious issues. $acl ||= ''; # Strip array forming 'junk'. $acl =~ s/^{//g; $acl =~ s/}$//g; $acl =~ s/"//g; # Foreach acl foreach ( split( /\,/, $acl ) ) { my ( $user, $raw_permissions ) = split( /=/, $_ ); if ( defined($raw_permissions) ) { if ( $user eq '' ) { $user = 'PUBLIC'; } # The section after the / is the user who granted the permissions my ( $permissions, $granting_user ) = split( /\//, $raw_permissions ); # Break down permissions to individual flags if ( $permissions =~ /a/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'INSERT'} = 1; } if ( $permissions =~ /r/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'SELECT'} = 1; } if ( $permissions =~ /w/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'UPDATE'} = 1; } if ( $permissions =~ /d/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'DELETE'} = 1; } if ( $permissions =~ /R/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'RULE'} = 1; } if ( $permissions =~ /x/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'REFERENCES'} = 1; } if ( $permissions =~ /t/ ) { $struct->{$schema}{'TABLE'}{$relname}{'ACL'}{$user} {'TRIGGER'} = 1; } } } # Primitive Stats, but only if requested if ( $statistics == 1 and $tables->{'reltype'} eq 'table' ) { $sth_Table_Statistics->execute($reloid); my $stats = $sth_Table_Statistics->fetchrow_hashref; $struct->{$schema}{'TABLE'}{$relname}{'TABLELEN'} = $stats->{'table_len'}; $struct->{$schema}{'TABLE'}{$relname}{'TUPLECOUNT'} = $stats->{'tuple_count'}; $struct->{$schema}{'TABLE'}{$relname}{'TUPLELEN'} = $stats->{'tuple_len'}; $struct->{$schema}{'TABLE'}{$relname}{'DEADTUPLELEN'} = $stats->{'dead_tuple_len'}; $struct->{$schema}{'TABLE'}{$relname}{'FREELEN'} = $stats->{'free_space'}; } # Store the relation type $struct->{$schema}{'TABLE'}{$relname}{'TYPE'} = $tables->{'reltype'}; # Store table description $struct->{$schema}{'TABLE'}{$relname}{'DESCRIPTION'} = $tables->{'table_description'}; # Store the view definition $struct->{$schema}{'TABLE'}{$relname}{'VIEW_DEF'} = $tables->{'view_definition'}; # Store constraints 
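        # Only CHECK constraints are fetched here (contype = 'c' in
        # $sql_Constraint); each is stored under its name with the source
        # text from pg_get_constraintdef().  PRIMARY KEY, UNIQUE and
        # FOREIGN KEY constraints are gathered separately below.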
$sth_Constraint->execute($reloid); while ( my $cols = $sth_Constraint->fetchrow_hashref ) { my $constraint_name = $cols->{'constraint_name'}; $struct->{$schema}{'TABLE'}{$relname}{'CONSTRAINT'} {$constraint_name} = $cols->{'constraint_source'}; } $sth_Columns->execute($reloid); my $i = 1; while ( my $cols = $sth_Columns->fetchrow_hashref ) { my $column_name = $cols->{'column_name'}; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'ORDER'} = $cols->{'attnum'}; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'PRIMARY KEY'} = 0; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'FKTABLE'} = ''; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'TYPE'} = $cols->{'column_type'}; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'NULL'} = $cols->{'column_null'}; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'DESCRIPTION'} = $cols->{'column_description'}; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column_name} {'DEFAULT'} = $cols->{'column_default'}; } # Pull out both PRIMARY and UNIQUE keys based on the supplied query # and the relation OID. # # Since there may be multiple UNIQUE indexes on a table, we append a # number to the end of the the UNIQUE keyword which shows that they # are a part of a related definition. I.e UNIQUE_1 goes with UNIQUE_1 # $sth_Primary_Keys->execute($reloid); my $unqgroup = 0; while ( my $pricols = $sth_Primary_Keys->fetchrow_hashref ) { my $index_type = $pricols->{'constraint_type'}; my $con = $pricols->{'constraint_name'}; my $indexdef = $pricols->{'constraint_definition'}; # Fetch the column list my $column_list = $indexdef; $column_list =~ s/.*\(([^)]+)\).*/$1/g; # Split our column list and deal with all PRIMARY KEY fields my @collist = split( ',', $column_list ); # Store the column number in the indextype field. Anything > 0 # indicates the column has this type of constraint applied to it. my $column; my $currentcol = $#collist + 1; my $numcols = $#collist + 1; # Bump group number if there are two or more columns if ( $numcols >= 2 && $index_type eq 'UNIQUE' ) { $unqgroup++; } # Record the data to the structure. while ( $column = pop(@collist) ) { $column =~ s/\s$//; $column =~ s/^\s//; $column =~ s/^"//; $column =~ s/"$//; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'TYPE'} = $index_type; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'COLNUM'} = $currentcol--; # Record group number only when a multi-column # constraint is involved if ( $numcols >= 2 && $index_type eq 'UNIQUE' ) { $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column} {'CON'}{$con}{'KEYGROUP'} = $unqgroup; } } } # FOREIGN KEYS like UNIQUE indexes can appear several times in # a table in multi-column format. We use the same trick to # record a numeric association to the foreign key reference. $sth_Foreign_Keys->execute($reloid); my $fkgroup = 0; while ( my $forcols = $sth_Foreign_Keys->fetchrow_hashref ) { my $column_oid = $forcols->{'oid'}; my $con = $forcols->{'constraint_name'}; # Declare variables for dataload my @keylist; my @fkeylist; my $fschema; my $ftable; my $fkey = $forcols->{'constraint_fkey'}; my $keys = $forcols->{'constraint_key'}; my $frelid = $forcols->{'foreignrelid'}; # Since decent array support was not added until 7.4, and # we want to support 7.3 as well, we parse the text version # of the array by hand rather than combining this and # Foreign_Key_Arg query into a single query. 
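            # Newer DBD::Pg releases return conkey/confkey as real Perl
            # array references, while older releases return the raw
            # '{1,2}' text form, so both shapes are handled below before
            # the column numbers are resolved to names via
            # $sth_Foreign_Key_Arg.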
my @fkeyset; if ( ref $fkey eq 'ARRAY' ) { @fkeyset = @{$fkey}; } else { # DEPRECATED: DBD::Pg 1.49 and earlier $fkey =~ s/^{//g; $fkey =~ s/}$//g; $fkey =~ s/"//g; @fkeyset = split( /,/, $fkey ); } my @keyset; if ( ref $keys eq 'ARRAY' ) { @keyset = @{$keys}; } else { # DEPRECATED: DBD::Pg 1.49 and earlier $keys =~ s/^{//g; $keys =~ s/}$//g; $keys =~ s/"//g; @keyset = split( /,/, $keys ); } # Convert the list of column numbers into column names for the # local side. foreach my $k (@keyset) { $sth_Foreign_Key_Arg->execute( $reloid, $k ); my $row = $sth_Foreign_Key_Arg->fetchrow_hashref; push( @keylist, $row->{'attribute_name'} ); } # Convert the list of columns numbers into column names # for the referenced side. Grab the table and namespace # while we're here. foreach my $k (@fkeyset) { $sth_Foreign_Key_Arg->execute( $frelid, $k ); my $row = $sth_Foreign_Key_Arg->fetchrow_hashref; push( @fkeylist, $row->{'attribute_name'} ); $fschema = $row->{'namespace'}; $ftable = $row->{'relation_name'}; } # Deal with common catalog issues. die "FKEY $con Broken -- fix your PostgreSQL installation" if $#keylist != $#fkeylist; # Load up the array based on the information discovered # using the information retrieval methods above. my $numcols = $#keylist + 1; my $currentcol = $#keylist + 1; # Bump group number if there are two or more columns involved if ( $numcols >= 2 ) { $fkgroup++; } # Record the foreign key to structure while ( my $column = pop(@keylist) and my $fkey = pop(@fkeylist) ) { $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'TYPE'} = 'FOREIGN KEY'; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'COLNUM'} = $currentcol--; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'FKTABLE'} = $ftable; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'FKSCHEMA'} = $fschema; $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column}{'CON'} {$con}{'FK-COL NAME'} = $fkey; # Record group number only when a multi-column # constraint is involved if ( $numcols >= 2 ) { $struct->{$schema}{'TABLE'}{$relname}{'COLUMN'}{$column} {'CON'}{$con}{'KEYGROUP'} = $fkgroup; } } } # Pull out index information $sth_Indexes->execute( $schema, $relname ); while ( my $idx = $sth_Indexes->fetchrow_hashref ) { $struct->{$schema}{'TABLE'}{$relname}{'INDEX'} { $idx->{'indexname'} } = $idx->{'indexdef'}; } # Extract Inheritance information $sth_Inheritance->execute( $schema, $relname ); while ( my $inherit = $sth_Inheritance->fetchrow_hashref ) { my $parSch = $inherit->{'par_schemaname'}; my $parTab = $inherit->{'par_tablename'}; $struct->{$schema}{'TABLE'}{$relname}{'INHERIT'}{$parSch}{$parTab} = 1; } } # Function Handling $sth_Function->execute(); while ( my $functions = $sth_Function->fetchrow_hashref and not $table_out ) { my $schema = $functions->{'namespace'}; my $comment = $functions->{'comment'}; my $functionargs = $functions->{'function_args'}; my @types = split( ' ', $functionargs ); my $count = 0; # Pre-setup argument names when available. my $argnames = $functions->{'function_arg_names'}; # Setup full argument types including the parameter name my @parameters; for my $type (@types) { $sth_FunctionArg->execute($type); my $hash = $sth_FunctionArg->fetchrow_hashref; my $parameter = ''; if ($argnames) { $parameter .= sprintf( '%s ', shift( @{$argnames} ) ); } if ( $hash->{'namespace'} ne $system_schema ) { $parameter .= $hash->{'namespace'} . 
'.'; } $parameter .= $hash->{'type_name'}; push( @parameters, $parameter ); } my $functionname = sprintf( '%s(%s)', $functions->{'function_name'}, join( ', ', @parameters ) ); my $ret_type = $functions->{'returns_set'} ? 'SET OF ' : ''; $sth_FunctionArg->execute( $functions->{'return_type'} ); my $rhash = $sth_FunctionArg->fetchrow_hashref; $ret_type .= $rhash->{'type_name'}; $struct->{$schema}{'FUNCTION'}{$functionname}{'COMMENT'} = $comment; $struct->{$schema}{'FUNCTION'}{$functionname}{'SOURCE'} = $functions->{'source_code'}; $struct->{$schema}{'FUNCTION'}{$functionname}{'LANGUAGE'} = $functions->{'language_name'}; $struct->{$schema}{'FUNCTION'}{$functionname}{'RETURNS'} = $ret_type; } # Deal with the Schema $sth_Schema->execute(); while ( my $schema = $sth_Schema->fetchrow_hashref ) { my $comment = $schema->{'comment'}; my $namespace = $schema->{'namespace'}; $struct->{$namespace}{'SCHEMA'}{'COMMENT'} = $comment; } $sth_Columns->finish(); $sth_Constraint->finish(); $sth_Database->finish(); $sth_Foreign_Keys->finish(); $sth_Foreign_Key_Arg->finish(); $sth_Function->finish(); $sth_FunctionArg->finish(); $sth_Indexes->finish(); $sth_Inheritance->finish(); $sth_Primary_Keys->finish(); $sth_Schema->finish(); $sth_Tables->finish(); $sth_Table_Statistics->finish() if ( $statistics == 1 ); $dbh->disconnect; } ##### # write_using_templates # # Generate structure that HTML::Template requires out of the # $struct for table related information, and $struct for # the schema and function information sub write_using_templates($$$$$$) { my ( $db, $database, $statistics, $template_path, $output_filename_base, $wanted_output, $filter_prog ) = @_; my $struct = $db->{$database}{'STRUCT'}; my @schemas; # Start at 0, increment to 1 prior to use. my $object_id = 0; my %tableids; foreach my $schema ( sort keys %{$struct} ) { my @tables; foreach my $table ( sort keys %{ $struct->{$schema}{'TABLE'} } ) { # Column List my @columns; foreach my $column ( sort { $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$a} {'ORDER'} <=> $struct->{$schema}{'TABLE'}{$table} {'COLUMN'}{$b}{'ORDER'} } keys %{ $struct->{$schema}{'TABLE'}{$table}{'COLUMN'} } ) { my $inferrednotnull = 0; # Have a shorter default for places that require it my $shortdefault = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'DEFAULT'}; $shortdefault =~ s/^(.{17}).{5,}(.{5})$/$1 ... 
$2/g if ( defined($shortdefault) ); # Deal with column constraints my @colconstraints; foreach my $con ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'} } ) { if ( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'TYPE'} eq 'UNIQUE' ) { my $unq = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'TYPE'}; my $unqcol = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'COLNUM'}; my $unqgroup = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'KEYGROUP'}; push @colconstraints, { column_unique => $unq, column_unique_colnum => $unqcol, column_unique_keygroup => $unqgroup, }; } elsif ( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'TYPE'} eq 'PRIMARY KEY' ) { $inferrednotnull = 1; push @colconstraints, { column_primary_key => 'PRIMARY KEY', }; } elsif ( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'TYPE'} eq 'FOREIGN KEY' ) { my $fksgmlid = sgml_safe_id( join( '.', $struct->{$schema}{'TABLE'}{$table}{'COLUMN'} {$column}{'CON'}{$con}{'FKSCHEMA'}, $struct->{$schema}{'TABLE'}{$table}{'TYPE'}, $struct->{$schema}{'TABLE'}{$table}{'COLUMN'} {$column}{'CON'}{$con}{'FKTABLE'} ) ); my $fkgroup = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'KEYGROUP'}; my $fktable = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'FKTABLE'}; my $fkcol = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'FK-COL NAME'}; my $fkschema = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'CON'}{$con}{'FKSCHEMA'}; push @colconstraints, { column_fk => 'FOREIGN KEY', column_fk_colnum => $fkcol, column_fk_keygroup => $fkgroup, column_fk_schema => $fkschema, column_fk_schema_dbk => docbook($fkschema), column_fk_schema_dot => graphviz($fkschema), column_fk_sgmlid => $fksgmlid, column_fk_table => $fktable, column_fk_table_dbk => docbook($fktable), }; # only have the count if there is more than 1 schema if ( scalar( keys %{$struct} ) > 1 ) { $colconstraints[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } } } # Generate the Column array push @columns, { column => $column, column_dbk => docbook($column), column_dot => graphviz($column), column_default => $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'DEFAULT'}, column_default_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'DEFAULT'} ), column_default_short => $shortdefault, column_default_short_dbk => docbook($shortdefault), column_comment => filter_comment( $filter_prog, $wanted_output, $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'DESCRIPTION'} ), column_comment_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'DESCRIPTION'} ), column_number => $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'ORDER'}, column_type => $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'TYPE'}, column_type_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'TYPE'} ), column_type_dot => graphviz( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'TYPE'} ), column_constraints => \@colconstraints, }; if ( $inferrednotnull == 0 ) { $columns[-1]{"column_constraint_notnull"} = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column} {'NULL'}; } } # Constraint List my @constraints; foreach my $constraint ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'CONSTRAINT'} } ) { my $shortcon = $struct->{$schema}{'TABLE'}{$table}{'CONSTRAINT'} {$constraint}; $shortcon =~ 
s/^(.{30}).{5,}(.{5})$/$1 ... $2/g; push @constraints, { constraint => $struct->{$schema}{'TABLE'}{$table}{'CONSTRAINT'} {$constraint}, constraint_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'CONSTRAINT'} {$constraint} ), constraint_name => $constraint, constraint_name_dbk => docbook($constraint), constraint_short => $shortcon, constraint_short_dbk => docbook($shortcon), table => $table, table_dbk => docbook($table), table_dot => graphviz($table), }; } # Index List my @indexes; foreach my $index ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'INDEX'} } ) { push @indexes, { index_definition => $struct->{$schema}{'TABLE'}{$table}{'INDEX'}{$index}, index_definition_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'INDEX'}{$index} ), index_name => $index, index_name_dbk => docbook($index), table => $table, table_dbk => docbook($table), table_dot => graphviz($table), schema => $schema, schema_dbk => docbook($schema), schema_dot => graphviz($schema), }; } my @inherits; foreach my $inhSch ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'INHERIT'} } ) { foreach my $inhTab ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'INHERIT'}{$inhSch} } ) { push @inherits, { table => $table, table_dbk => docbook($table), table_dot => graphviz($table), schema => $schema, schema_dbk => docbook($schema), schema_dot => graphviz($schema), sgmlid => sgml_safe_id( join( '.', $schema, 'table', $table ) ), parent_sgmlid => sgml_safe_id( join( '.', $inhSch, 'table', $inhTab ) ), parent_table => $inhTab, parent_table_dbk => docbook($inhTab), parent_table_dot => graphviz($inhTab), parent_schema => $inhSch, parent_schema_dbk => docbook($inhSch), parent_schema_dot => graphviz($inhSch), }; } } # Foreign Key Discovery # # $lastmatch is used to ensure that we only supply a result a # single time and not once for each link found. Since the # loops are sorted, we only need to track the last element, and # not all supplied elements. 
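#
# Illustrative sketch only (kept in comments, never executed): because the
# nested loops below iterate over sorted keys, every constraint that points
# at the same referencing table arrives in one consecutive run, so
# remembering just the previous "$fk_schema$fk_table" string is enough to
# de-duplicate.  The hypothetical keys here merely stand in for whatever the
# sorted loops would produce:
#
#   my $last = '';
#   for my $key ( 'store.inventory', 'store.inventory', 'warehouse.inventory' ) {
#       next if $key eq $last;      # skip the consecutive duplicate
#       print "referenced by $key\n";
#       $last = $key;
#   }
#
# A %seen hash would also handle unsorted input; the single string compare is
# simply cheaper given the guaranteed ordering.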
my @fk_schemas; my $lastmatch = ''; foreach my $fk_schema ( sort keys %{$struct} ) { foreach my $fk_table ( sort keys %{ $struct->{$fk_schema}{'TABLE'} } ) { foreach my $fk_column ( sort keys %{ $struct->{$fk_schema}{'TABLE'}{$fk_table}{'COLUMN'} } ) { foreach my $fk_con ( sort keys %{ $struct->{$fk_schema}{'TABLE'}{$fk_table} {'COLUMN'}{$fk_column}{'CON'} } ) { if ( $struct->{$fk_schema}{'TABLE'}{$fk_table} {'COLUMN'}{$fk_column}{'CON'}{$fk_con}{'TYPE'} eq 'FOREIGN KEY' and $struct->{$fk_schema}{'TABLE'}{$fk_table} {'COLUMN'}{$fk_column}{'CON'}{$fk_con} {'FKTABLE'} eq $table and $struct->{$fk_schema}{'TABLE'}{$fk_table} {'COLUMN'}{$fk_column}{'CON'}{$fk_con} {'FKSCHEMA'} eq $schema and $lastmatch ne "$fk_schema$fk_table" ) { my $fksgmlid = sgml_safe_id( join( '.', $fk_schema, $struct->{$fk_schema}{'TABLE'} {$fk_table}{'TYPE'}, $fk_table ) ); push @fk_schemas, { fk_column_number => $struct->{$fk_schema}{'TABLE'}{$fk_table} {'COLUMN'}{$fk_column}{'ORDER'}, fk_sgmlid => $fksgmlid, fk_schema => $fk_schema, fk_schema_dbk => docbook($fk_schema), fk_schema_dot => graphviz($fk_schema), fk_table => $fk_table, fk_table_dbk => docbook($fk_table), fk_table_dot => graphviz($fk_table), }; # only have the count if there is more than 1 schema if ( scalar( keys %{$struct} ) > 1 ) { $fk_schemas[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } $lastmatch = "$fk_schema$fk_table"; } } } } } # List off permissions my @permissions; foreach my $user ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'ACL'} } ) { push @permissions, { schema => $schema, schema_dbk => docbook($schema), schema_dot => graphviz($schema), table => $table, table_dbk => docbook($table), table_dot => graphviz($table), user => $user, user_dbk => docbook($user), }; # only have the count if there is more than 1 schema if ( scalar( keys %{$struct} ) > 1 ) { $permissions[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } foreach my $perm ( keys %{ $struct->{$schema}{'TABLE'}{$table}{'ACL'}{$user} } ) { if ( $struct->{$schema}{'TABLE'}{$table}{'ACL'}{$user} {$perm} == 1 ) { $permissions[-1]{ lower($perm) } = 1; } } } # Increment and record the object ID $tableids{"$schema$table"} = ++$object_id; my $viewdef = sql_prettyprint( $struct->{$schema}{'TABLE'}{$table}{'VIEW_DEF'} ); # Truncate comment for Dia my $comment_dia = $struct->{$schema}{'TABLE'}{$table}{'DESCRIPTION'}; $comment_dia =~ s/^(.{35}).{5,}(.{5})$/$1 ... $2/g if ( defined($comment_dia) ); push @tables, { object_id => $object_id, object_id_dbk => docbook($object_id), schema => $schema, schema_dbk => docbook($schema), schema_dot => graphviz($schema), schema_sgmlid => sgml_safe_id( $schema . 
".schema" ), # Statistics stats_enabled => $statistics, stats_dead_bytes => useUnits( $struct->{$schema}{'TABLE'}{$table}{'DEADTUPLELEN'} ), stats_dead_bytes_dbk => docbook( useUnits( $struct->{$schema}{'TABLE'}{$table}{'DEADTUPLELEN'} ) ), stats_free_bytes => useUnits( $struct->{$schema}{'TABLE'}{$table}{'FREELEN'} ), stats_free_bytes_dbk => docbook( useUnits( $struct->{$schema}{'TABLE'}{$table}{'FREELEN'} ) ), stats_table_bytes => useUnits( $struct->{$schema}{'TABLE'}{$table}{'TABLELEN'} ), stats_table_bytes_dbk => docbook( useUnits( $struct->{$schema}{'TABLE'}{$table}{'TABLELEN'} ) ), stats_tuple_count => $struct->{$schema}{'TABLE'}{$table}{'TUPLECOUNT'}, stats_tuple_count_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'TUPLECOUNT'} ), stats_tuple_bytes => useUnits( $struct->{$schema}{'TABLE'}{$table}{'TUPLELEN'} ), stats_tuple_bytes_dbk => docbook( useUnits( $struct->{$schema}{'TABLE'}{$table}{'TUPLELEN'} ) ), table => $table, table_dbk => docbook($table), table_dot => graphviz($table), table_sgmlid => sgml_safe_id( join( '.', $schema, $struct->{$schema}{'TABLE'}{$table}{'TYPE'}, $table ) ), table_comment => filter_comment( $filter_prog, $wanted_output, $struct->{$schema}{'TABLE'}{$table}{'DESCRIPTION'} ), table_comment_dbk => docbook( $struct->{$schema}{'TABLE'}{$table}{'DESCRIPTION'} ), table_comment_dia => $comment_dia, view_definition => $viewdef, view_definition_dbk => docbook($viewdef), columns => \@columns, constraints => \@constraints, fk_schemas => \@fk_schemas, indexes => \@indexes, inherits => \@inherits, permissions => \@permissions, }; # only have the count if there is more than 1 schema if ( scalar( keys %{$struct} ) > 1 ) { $tables[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } } # Dump out list of functions my @functions; foreach my $function ( sort keys %{ $struct->{$schema}{'FUNCTION'} } ) { push @functions, { function => $function, function_dbk => docbook($function), function_sgmlid => sgml_safe_id( join( '.', $schema, 'function', $function ) ), function_comment => filter_comment( $filter_prog, $wanted_output, $struct->{$schema}{'FUNCTION'}{$function}{'COMMENT'} ), function_comment_dbk => docbook( $struct->{$schema}{'FUNCTION'}{$function}{'COMMENT'} ), function_language => uc( $struct->{$schema}{'FUNCTION'}{$function}{'LANGUAGE'} ), function_returns => $struct->{$schema}{'FUNCTION'}{$function}{'RETURNS'}, function_source => $struct->{$schema}{'FUNCTION'}{$function}{'SOURCE'}, schema => $schema, schema_dbk => docbook($schema), schema_dot => graphviz($schema), schema_sgmlid => sgml_safe_id( $schema . ".schema" ), }; # only have the count if there is more than 1 schema if ( scalar( keys %{$struct} ) > 1 ) { $functions[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } } push @schemas, { schema => $schema, schema_dbk => docbook($schema), schema_dot => graphviz($schema), schema_sgmlid => sgml_safe_id( $schema . ".schema" ), schema_comment => filter_comment( $filter_prog, $wanted_output, $struct->{$schema}{'SCHEMA'}{'COMMENT'} ), schema_comment_dbk => docbook( $struct->{$schema}{'SCHEMA'}{'COMMENT'} ), functions => \@functions, tables => \@tables, }; # Build the array of schemas if ( scalar( keys %{$struct} ) > 1 ) { $schemas[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } } # Link the various components together via the template. 
my @fk_links;
my @fkeys;
foreach my $schema ( sort keys %{$struct} ) { foreach my $table ( sort keys %{ $struct->{$schema}{'TABLE'} } ) { foreach my $column ( sort { $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$a}{'ORDER'} <=> $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$b}{'ORDER'} } keys %{ $struct->{$schema}{'TABLE'}{$table}{'COLUMN'} } ) { foreach my $con ( sort keys %{ $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'} } ) {
# To prevent a multi-column foreign key from appearing several times, we've
# opted to simply display the first column of any given key, since column
# numbering always starts at 1 for foreign keys.
if ( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'}{$con}{'TYPE'} eq 'FOREIGN KEY' && $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'}{$con}{'COLNUM'} == 1 ) {
# Pull out some of the longer keys
my $ref_table = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'}{$con}{'FKTABLE'}; my $ref_schema = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'}{$con}{'FKSCHEMA'}; my $ref_column = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'}{$con}{'FK-COL NAME'};
# Default values cause these elements to attach to the bottom in Dia.
# If a KEYGROUP is not defined, it's a single column; modify the ref_con and
# key_con variables to attach them to the column's connection point directly.
my $ref_con = 0; my $key_con = 0; my $keycon_offset = 0; if ( !defined( $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'CON'}{$con}{'KEYGROUP'} ) ) { $ref_con = $struct->{$ref_schema}{'TABLE'}{$ref_table}{'COLUMN'}{$ref_column}{'ORDER'} || 0; $key_con = $struct->{$schema}{'TABLE'}{$table}{'COLUMN'}{$column}{'ORDER'} || 0; $keycon_offset = 1; }
# Bump object_id
$object_id++;
push @fk_links, { fk_link_name => $con, fk_link_name_dbk => docbook($con), fk_link_name_dot => graphviz($con), handle0_connection => $key_con, handle0_connection_dbk => docbook($key_con), handle0_connection_dia => 6 + ( $key_con * 2 ), handle0_name => $table, handle0_name_dbk => docbook($table), handle0_schema => $schema, handle0_to => $tableids{"$schema$table"}, handle0_to_dbk => docbook( $tableids{"$schema$table"} ), handle1_connection => $ref_con, handle1_connection_dbk => docbook($ref_con), handle1_connection_dia => 6 + ( $ref_con * 2 ) + $keycon_offset, handle1_name => $ref_table, handle1_name_dbk => docbook($ref_table), handle1_schema => $ref_schema, handle1_to => $tableids{"$ref_schema$ref_table"}, handle1_to_dbk => docbook( $tableids{"$ref_schema$ref_table"} ), object_id => $object_id, object_id_dbk => docbook($object_id), };
# Only have the count if there is more than 1 schema
if ( scalar( keys %{$struct} ) > 1 ) { $fk_links[-1]{"number_of_schemas"} = scalar( keys %{$struct} ); } } } } } }
# Make database level comment information
my @timestamp = localtime(); my $dumped_on = sprintf( "%04d-%02d-%02d", $timestamp[5] + 1900, $timestamp[4] + 1, $timestamp[3] ); my $database_comment = $db->{$database}{'COMMENT'};
# Loop through each template found in the supplied path.
# Output the results of each template as <output_filename_base>.<file_extension>
# into the current working directory.
my @template_files = glob( $template_path . '/*.tmpl' );
# Ensure we've told the user if we don't find any files.
triggerError("Template files not found in $template_path") if ( $#template_files < 0 );
# Process all found templates.
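#
# For reference, a minimal hypothetical template dropped into the template
# path (say "summary.tmpl") would be picked up by the loop below and written
# out as <output_filename_base>.summary; something like:
#
#   <TMPL_VAR name="database"> (dumped <TMPL_VAR name="dumped_on">)
#   <TMPL_LOOP name="schemas">Schema: <TMPL_VAR name="schema">
#   <TMPL_LOOP name="tables">  - <TMPL_VAR name="table">
#   </TMPL_LOOP></TMPL_LOOP>
#
# The stock templates shipped with the tool (html, xml, dia, neato, ...)
# follow the same pattern, only using many more of the parameters set via
# param() below.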
foreach my $template_file (@template_files) { ( my $file_extension = $template_file ) =~ s/^(?:.*\/|)([^\/]+)\.tmpl$/$1/; next if ( defined($wanted_output) && $file_extension ne $wanted_output ); my $output_filename = "$output_filename_base.$file_extension"; print "Producing $output_filename from $template_file\n"; my $template = HTML::Template->new( filename => $template_file, die_on_bad_params => 0, global_vars => 0, strict => 1, loop_context_vars => 1 ); $template->param( database => $database, database_dbk => docbook($database), database_sgmlid => sgml_safe_id($database), database_comment => filter_comment( $filter_prog, $wanted_output, $database_comment ), database_comment_dbk => docbook($database_comment), dumped_on => $dumped_on, dumped_on_dbk => docbook($dumped_on), fk_links => \@fk_links, schemas => \@schemas, ); sysopen( FH, $output_filename, O_WRONLY | O_TRUNC | O_CREAT, 0644 ) or die "Can't open $output_filename: $!"; print FH $template->output(); } }
######
# sgml_safe_id
#   Safe SGML ID character replacement
sub sgml_safe_id($) { my $string = shift;
# Let's use the keyword ARRAY in place of the square brackets
# to prevent duplicating a non-array equivalent
$string =~ s/\[\]/ARRAY-/g;
# Brackets, spaces, commas, underscores are not valid 'id' characters;
# replace with as few -'s as possible.
$string =~ s/[ "',)(_-]+/-/g;
# Don't want a - at the end either. It looks silly.
$string =~ s/-$//g; return ($string); }
#####
# lower
#   Lowercase the string
sub lower($) { my $string = shift; $string =~ tr/A-Z/a-z/; return ($string); }
#####
# useUnits
#   Tack on base-2 (binary) size units
sub useUnits($) { my ($value) = @_; return '' if ( !defined($value) ); my @units = ( 'Bytes', 'KiBytes', 'MiBytes', 'GiBytes', 'TiBytes' ); my $loop = 0; while ( $value >= 1024 ) { $loop++; $value = $value / 1024; } return ( sprintf( "%.2f %s", $value, $units[$loop] ) ); }
#####
# docbook
#   Docbook output is special in that we may or may not want to escape
#   the characters inside the string depending on a string prefix.
sub docbook($) { my $string = shift; if ( defined($string) ) { if ( $string =~ /^\@DOCBOOK/ ) { $string =~ s/^\@DOCBOOK//; } else { $string =~ s/&(?!(amp|lt|gt|apos|quot);)/&amp;/g; $string =~ s/</&lt;/g; $string =~ s/>/&gt;/g; $string =~ s/'/&apos;/g; $string =~ s/"/&quot;/g; } } else {
# Return an empty string when all else fails
$string = ''; } return ($string); }
#####
# graphviz
#   GraphViz output requires that special characters (like " and whitespace)
#   be preceded by a \ when they are part of a label.
sub graphviz($) { my $string = shift;
# Ensure we at least return an empty string rather than undef
$string = '' if ( !defined($string) ); $string =~ s/([\s"'])/\\$1/g; return ($string); }
#####
# sql_prettyprint
#   Clean up SQL into something presentable
sub sql_prettyprint($) { my $string = shift;
# If nothing has been sent in, return an empty string
if ( !defined($string) ) { return ''; }
# Initialize result string
my $result = '';
# List of tokens to split on
my $tok = "SELECT|FROM|WHERE|HAVING|GROUP BY|ORDER BY|OR|AND|LEFT JOIN|RIGHT JOIN" . "|LEFT OUTER JOIN|LEFT INNER JOIN|INNER JOIN|RIGHT OUTER JOIN|RIGHT INNER JOIN" . "|JOIN|UNION ALL|UNION|EXCEPT|USING|ON|CAST|[\(\),]";
my $key = 0; my $bracket = 0; my $depth = 0; my $indent = 6;
# XXX: Split is wrong -- match would do
foreach my $elem ( split( /(\"[^\"]*\"|'[^']*'|$tok)/, $string ) ) { my $format;
# Skip junk tokens
if ( $elem =~ /^[\s]?$/ ) { next; }
# NOTE: Should we drop leading spaces?
# $elem =~ s/^\s//;
# Close brackets are special -- bring depth in a level
if ( $elem =~ /\)/ ) { $depth = $depth - $indent; if ( $key == 1 or $bracket == 1 ) { $format = "%s%s"; } else { $format = "%s\n%" . $depth . "s"; } $key = 0; $bracket = 0; }
# Open brackets are special -- bump depth out a level
elsif ( $elem =~ /\(/ ) { if ( $key == 1 ) { $format = "%s %s"; } else { $format = "%s\n%" . $depth . "s"; } $depth = $depth + $indent; $bracket = 1; $key = 0; }
# Key element -- token from our list; format on the left hand side of the
# equation when appropriate.
elsif ( $elem =~ /$tok/ ) { if ( $key == 1 ) { $format = "%s%s"; } else { $format = "%s\n%" . $depth . "s"; } $key = 1; $bracket = 0; }
# Value -- format for the right hand side of the equation
else { $format = "%s%s"; $key = 0; }
# Add the new format string to the result
$result = sprintf( $format, $result, $elem ); } return $result; }
##
# triggerError
#   Print out a supplied error message and exit the script.
sub triggerError($) { my ($error) = @_;
# Test error
if ( !defined($error) || $error eq '' ) {
# Suppress prototype checking in call to self
&triggerError("triggerError: Unknown error"); } printf( "\n\n%s\n", $error ); exit 2; }
sub filter_comment($$$) { my ( $filter_prog, $wanted_output, $comment ) = @_; if ($filter_prog) { if ($comment) { my ( $tmp, $fn ) = tempfile( UNLINK => 1 ); print( $tmp $comment ); $tmp->flush; $comment = `$filter_prog $wanted_output $fn`; } } return $comment; }
#####
# usage
sub usage($$$) { my ( $basename, $database, $dbuser ) = @_;
print <<USAGE
Usage:
  $basename [options] [dbname [username]]

Options:
  -d <dbname>       Specify database name to connect to (default: $database)
  -f <file>         Specify output file prefix (default: $database)
  -h <host>         Specify database server host (default: localhost)
  -p <port>         Specify database server port (default: 5432)
  -u <username>     Specify database username (default: $dbuser)
  --password=<pw>   Specify database password (default: blank)
  --password        Have $basename prompt for a password

  -l <path>         Path to the templates (default: @@TEMPLATE-DIR@@)
  -t <output>       Type of output wanted (default: All in template library)

  -s <schema>       Specify a specific schema to match. Technically this is a
                    regular expression but anything other than a specific name
                    may have unusual results.

  -m <regexp>       Show only tables/objects with names matching the specified
                    regular expression.

  --table=<tables>  Tables to export. Multiple tables may be provided using a
                    comma-separated list, i.e. table,table2,table3.

  --statistics      In 7.4 and later, with the contrib module pgstattuple
                    installed we can gather statistics on the tables in the
                    database (average size, free space, disk space used, dead
                    tuple counts, etc.). This is disk intensive on large
                    databases as all pages must be visited.

  --comment-filter=<filter>
                    Process comments with an external program. This is useful
                    when comments are written in some markup language
                    (wiki/markdown); it allows conversion of structured text
                    to the desired output format. For each comment the command
                    "<filter> <output_format> <file>" is executed, where
                    <output_format> is the desired output format as specified
                    by -t and <file> is the path to a file holding the original
                    comment text. The filter should read this file, do its
                    processing, and write the result to standard output.
USAGE ; exit 1; } sub single_quote { my $attr = $_; $attr =~ s/^\s+|\s+$//g; return qq{'$attr'}; } ## # Kick off execution of main() main($ARGV); postgresql-autodoc-1.41+20200921/regressdatabase.sql000066400000000000000000000072461373214421700220660ustar00rootroot00000000000000-- -- $Id: regressdatabase.sql,v 1.2 2006/02/13 01:15:56 rbt Exp $ -- BEGIN; -- -- Foreign key'd structure, check constraints, primary keys -- and duplicate table names in different schemas -- CREATE SCHEMA product CREATE TABLE product ( product_id SERIAL PRIMARY KEY , product_code text NOT NULL UNIQUE CHECK(product_code = upper(product_code)) , product_description text ); CREATE SCHEMA store CREATE TABLE store ( store_id SERIAL PRIMARY KEY , store_code text NOT NULL UNIQUE CHECK(store_code = upper(store_code)) , store_description text ) CREATE TABLE inventory ( store_id integer REFERENCES store ON UPDATE CASCADE ON DELETE RESTRICT , product_id integer REFERENCES product.product ON UPDATE CASCADE ON DELETE RESTRICT , PRIMARY KEY(store_id, product_id) , quantity integer NOT NULL CHECK(quantity > 0) ); -- -- Another schema with -- CREATE SCHEMA warehouse CREATE TABLE warehouse ( warehouse_id SERIAL PRIMARY KEY , warehouse_code text NOT NULL UNIQUE CHECK(warehouse_code = upper(warehouse_code)) , warehouse_manager text NOT NULL , warehouse_supervisor text UNIQUE , warehouse_description text , CHECK (upper(warehouse_manager) != upper(warehouse_supervisor)) ) CREATE TABLE inventory ( warehouse_id integer REFERENCES warehouse ON UPDATE CASCADE ON DELETE RESTRICT , product_id integer REFERENCES product.product ON UPDATE CASCADE ON DELETE RESTRICT , PRIMARY KEY(warehouse_id, product_id) , quantity integer NOT NULL CHECK(quantity > 0) ) CREATE VIEW products AS SELECT DISTINCT product.* FROM inventory JOIN product.product USING (product_id); -- Sample index CREATE INDEX quantity_index ON warehouse.inventory (quantity); -- -- Simple text comments -- --COMMENT ON DATABASE IS --'This database has been created for the purpose of simple -- tests on PostgreSQL Autodoc.'; COMMENT ON SCHEMA product IS 'This schema stores a list of products and information about the product'; COMMENT ON SCHEMA warehouse IS 'A list of warehouses and information on warehouses'; COMMENT ON TABLE warehouse.inventory IS 'Warehouse inventory'; COMMENT ON TABLE store.inventory IS 'Store inventory'; COMMENT ON COLUMN warehouse.warehouse.warehouse_code IS 'Internal code which represents warehouses for invoice purposes'; COMMENT ON COLUMN warehouse.warehouse.warehouse_supervisor IS 'Supervisors name for a warehouse when one has been assigned. 
The same supervisor may not be assigned to more than one warehouse, per company policy XYZ.'; COMMENT ON COLUMN warehouse.warehouse.warehouse_manager IS 'Name of Warehouse Manager'; -- -- A few simple functions -- CREATE FUNCTION product.worker(integer, integer) RETURNS integer AS 'SELECT $1 + $1;' LANGUAGE sql; CREATE FUNCTION warehouse.worker(integer, integer) RETURNS integer AS 'SELECT $1 * $1;' LANGUAGE sql; COMMENT ON FUNCTION product.worker(integer, integer) IS 'Worker function appropriate for products'; COMMENT ON FUNCTION warehouse.worker(integer, integer) IS 'Worker function appropriate for warehouses.'; create or replace function product.worker2 (i_1 integer, i_2 integer) returns integer as $$ begin return i_1 + i_2; end $$ language plpgsql; comment on function product.worker2 (i_1 integer, i_2 integer) is 'worker function that uses named parameters'; -- -- Inheritance -- CREATE SCHEMA inherit CREATE TABLE taba (cola integer) CREATE TABLE tabb (colb integer) inherits(taba) CREATE TABLE tab1 (col1 integer) CREATE TABLE tab1b (col1b integer) inherits(tab1, tabb); COMMIT; postgresql-autodoc-1.41+20200921/xml.tmpl000066400000000000000000000203051373214421700176730ustar00rootroot00000000000000 %myent; ]> " xreflabel=" database schema"><!-- TMPL_VAR name="database_dbk" --> Model " xreflabel=""> Schema <!-- TMPL_VAR name="schema_dbk" -->
" xreflabel="."> -title"> <!-- TMPL_IF name="view_definition" -->View: <database class="table"><!-- TMPL_VAR name="table_dbk" --></database> <!-- TMPL_ELSE -->Table: <database class="view"><!-- TMPL_VAR name="table_dbk" --></database> <!-- /TMPL_IF name="view_definition" --> Structure of <database class="table"><!-- TMPL_VAR name="table_dbk" --></database> PRIMARY KEY UNIQUE# NOT NULL DEFAULT REFERENCES "/> Constraints on <!-- TMPL_VAR name="table_dbk" --> Indexes on <!-- TMPL_VAR name="table_dbk" --> Tables referencing <!-- TMPL_IF name="number_of_schemas" --><!-- TMPL_VAR ESCAPE="HTML" name="fk_schema_dbk" -->.<!-- /TMPL_IF name="number_of_schemas" --><!-- TMPL_VAR ESCAPE="HTML" name="fk_table_dbk" --> via Foreign Key Constraints "/>
Definition of view <!-- TMPL_VAR name="table_dbk" -->
Permissions on <!-- TMPL_IF name="number_of_schemas" --><!-- TMPL_VAR ESCAPE="HTML" name="schema" -->.<!-- /TMPL_IF name="number_of_schemas" --><!-- TMPL_VAR name="table_dbk" --> Select Insert Update Delete Rule References Trigger
-->
" xreflabel="()"> -title"> <!-- TMPL_VAR name="function_dbk" --> -titleabbrev"> Function Properties Language Return Type
postgresql-autodoc-1.41+20200921/zigzag.dia.tmpl000066400000000000000000000212731373214421700211270ustar00rootroot00000000000000 "> ## ## ## #PK # ## ## ## ## ## "> ## ## " connection=""/> " connection=""/>