pax_global_header00006660000000000000000000000064144343306640014521gustar00rootroot0000000000000052 comment=d90375b6696fe052a18faa1702711b346819fbef hypopg-1.4.0/000077500000000000000000000000001443433066400130315ustar00rootroot00000000000000hypopg-1.4.0/.gitignore000066400000000000000000000000621443433066400150170ustar00rootroot00000000000000*.o *.a *.so *.pc *~ .*.sw? hypopg-*.zip results/ hypopg-1.4.0/CHANGELOG.md000066400000000000000000000115401443433066400146430ustar00rootroot00000000000000Changelog ========= 2023-05-27 version 1.4.0: ------------------------- **New features**: - Support hypothetically hiding existing indexes, hypothetical or not (github user nutvii and Julien Rouhaud) **Miscellaneous**: - Have hypopg_relation_size() error out rather than returning 0 if called for an oid that isn't a hypothetical index oid - Slighthly reduce memory usage for hypothetical btree indexes without INCLUDE keys 2021-06-21 version 1.3.1: ------------------------- **Miscellaneous**: - Fix compatibility with PostgreSQL 14 beta 2 2021-06-04 version 1.3.0: ------------------------- **New features**: - Add support for hypothetical hash indexes (pg10+) 2021-02-26 version 1.2.0: ------------------------- **New features**: - Make hypopg work on standby servers using a new "fake" oid generator, that borrows Oids in the FirstBootstrapObjectId / FirstNormalObjectId range rather than real oids. If necessary, the old behavior can still be used with the new hypopg.use_real_oids configuration option. **Bug fixes** - Check if access methods support an INCLUDE clause to avoid creating invalid hypothetical indexes. - Display hypothetical indexes on dropped table in hypopg_list_indexes. **Miscellaneous** - Change hypopg_list_indexes() to view hypopg_list_indexes. - Various documentation improvements. 2020-06-24 version 1.1.4: ------------------------- **New features**: - Add support for hypothetical index on partitioned tables **Miscellaneous** - Fix compatibility with PostgreSQL 13 **Bug fixes** - Check that the target relation is a table or a materialized view 2019-06-16 version 1.1.3: ------------------------- **Miscellaneous** - Fix compatibility with PostgreSQL 12 - Don't leak client_encoding change after hypopg extension is created (Michael Kröll) - Use a dedicated MemoryContext to store hypothetical objects - Fix compatibility on Windows (Godwottery) **Bug fixes** - Call previous explain_get_index_name_hook if it was setup - add hypopg_reset_index() SQL function 2018-05-30 version 1.1.2: ------------------------- **New features** - Add support for INCLUDE on hypothetical indexes (pg11+) - Add support for parallel hypothetical index scan (pg11+) **Bug fixes:** - Fix support for pg11, thanks to Christoph Berg for the report 2018-03-20 version 1.1.1: ------------------------- **Bug fixes**: - Fix potentially uninitialized variables, thanks to Jeremy Finzel for the report. - Support hypothetical indexes on materialized view, thanks to Andrew Kane for the report. **Miscellaneous**: - add support for PostgreSQL 11 2017-10-04 version 1.1.0: ------------------------- **New features**: - add support for hypothetical indexes on expression - add a hypopg_get_indexdef() function to get definition of a stored hypothetical index **Bug fixes**: - don't allow hypothetical unique or multi-column index if the AM doesn't support it - disallow hypothetical indexes on system columns (except OID) - fix indexes using DESC clause and default NULLS ordering, thanks to Andrew Kane for the report and test case. 
- fix PostgreSQL 9.6+ support, thanks to Rob Stolarz for the report **Miscellaneous**: - add support for PostgreSQL 10 2016-10-24 version 1.0.0: ------------------------- - fix memory leak in hypopg() function 2016-07-07 version 0.0.5: ------------------------- - add support for PostgreSQL 9.6, thanks to Konstantin Mosolov for fixing some issues - add support from new bloom access method (9.6+) - fix issue with hypothetical indexes on expression (thanks to Konstantin Mosolov) - fix possible crash in hypothetical index size estimation 2015-11-06 version 0.0.4: ------------------------- - remove the simplified "hypopg_add_index()" function - free memory when hypothetical index creation fails - check that number of column is suitable for a real index - for btree indexes, check that the estimated average row size is small enough to allow a real index creation. - handle BRIN indexes. - handle index storage parameters for supported index methods. - handle index on predicate. - safer handling of locks. 2015-08-08 version 0.0.3: ------------------------- - fix a bug when a regular query could fail after a hypothetical index have been created, and tested with explain. - hypopg_create_index() and hypopg_add_index() now returns the oid and index names. - add hypopg.enabled GUC. It allows disabling HypoPG globally or in a single backend. Thanks to Ronan Dunklau for the patch. 2015-07-08 version 0.0.2: ------------------------- - fix crash when building hypothetical index on expression, thanks to Thom Brown for the report. 2015-06-24 version 0.0.1: ------------------------- - First version of HypoPG. hypopg-1.4.0/CONTRIBUTORS.md000066400000000000000000000006451443433066400153150ustar00rootroot00000000000000People who contributed to hypopg: * Julien Rouhaud * Yuzuko Hosoya * Thom brown * Ronan Dunklau * Мосолов Константин * Andrew Kane * Rob Stolarz * Jeremy Finzel * Christoph Berg * Joel Van Horn * Michael Lroll * Godwottery * Jan Koßmann * Extortioner01 * nagaraju11 * ibrahim edib kokdemir * github user nikhil-postgres * Xiaozhe Yao * Krzysztof Szularz * NutVIIhypopg-1.4.0/LICENSE000066400000000000000000000021201443433066400140310ustar00rootroot00000000000000Portions Copyright (c) 2015-2023, PostgreSQL GLobal Development Group Portions Copyright (c) 1994, The Regents of the University of California Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies. IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. 
hypopg-1.4.0/META.json000066400000000000000000000020351443433066400144520ustar00rootroot00000000000000{ "name": "hypopg", "abstract": "An extension adding hypothetical indexes in PostgreSQL.", "version": "__VERSION__", "maintainer": "Julien Rouhaud ", "license": "postgresql", "release_status": "stable", "provides": { "hypopg": { "abstract": "An extension adding hypothetical indexes in PostgreSQL.", "file": "hypopg.sql", "docfile": "README.md", "version": "__VERSION__" } }, "resources": { "bugtracker": { "web": "http://github.com/hypopg/hypopg/issues/" }, "repository": { "url": "git://github.com/hypopg/hypopg.git", "web": "http://github.com/hypopg/hypopg/", "type": "git" } }, "prereqs": { "runtime": { "requires": { "PostgreSQL": "9.2.0" } } }, "generated_by": "Julien Rouhaud", "meta-spec": { "version": "1.0.0", "url": "http://pgxn.org/meta/spec.txt" }, "tags": [ "index", "hypothetical" ] } hypopg-1.4.0/Makefile000066400000000000000000000033451443433066400144760ustar00rootroot00000000000000EXTENSION = hypopg EXTVERSION = $(shell grep default_version $(EXTENSION).control | sed -e "s/default_version[[:space:]]*=[[:space:]]*'\([^']*\)'/\1/") TESTS = $(wildcard test/sql/*.sql) # More test can be added later, after including pgxs REGRESS = hypopg REGRESS_OPTS = --inputdir=test PG_CONFIG ?= pg_config MODULE_big = hypopg OBJS = hypopg.o \ hypopg_index.o \ import/hypopg_import.o \ import/hypopg_import_index.o all: release-zip: all git archive --format zip --prefix=hypopg-${EXTVERSION}/ --output ./hypopg-${EXTVERSION}.zip HEAD unzip ./hypopg-$(EXTVERSION).zip rm ./hypopg-$(EXTVERSION).zip rm ./hypopg-$(EXTVERSION)/.gitignore rm ./hypopg-$(EXTVERSION)/docs/ -rf rm ./hypopg-$(EXTVERSION)/typedefs.list rm ./hypopg-$(EXTVERSION)/TODO.md sed -i -e "s/__VERSION__/$(EXTVERSION)/g" ./hypopg-$(EXTVERSION)/META.json zip -r ./hypopg-$(EXTVERSION).zip ./hypopg-$(EXTVERSION)/ rm ./hypopg-$(EXTVERSION) -rf DATA = $(wildcard *--*.sql) PGXS := $(shell $(PG_CONFIG) --pgxs) include $(PGXS) ifneq ($(MAJORVERSION),$(filter $(MAJORVERSION), 9.2 9.3 9.4)) REGRESS += hypo_brin endif ifeq ($(MAJORVERSION),10) REGRESS += hypo_index_part_10 endif ifneq ($(MAJORVERSION),$(filter $(MAJORVERSION), 9.2 9.3 9.4 9.5 9.6 10)) REGRESS += hypo_index_part hypo_include endif ifneq ($(MAJORVERSION),$(filter $(MAJORVERSION), 9.2 9.3 9.4 9.5 9.6)) REGRESS += hypo_hash endif REGRESS += hypo_hide_index DEBUILD_ROOT = /tmp/$(EXTENSION) deb: release-zip mkdir -p $(DEBUILD_ROOT) && rm -rf $(DEBUILD_ROOT)/* unzip ./${EXTENSION}-$(EXTVERSION).zip -d $(DEBUILD_ROOT) cd $(DEBUILD_ROOT)/${EXTENSION}-$(EXTVERSION) && make -f debian/rules orig cd $(DEBUILD_ROOT)/${EXTENSION}-$(EXTVERSION) && debuild -us -uc -sa hypopg-1.4.0/README.md000066400000000000000000000166121443433066400143160ustar00rootroot00000000000000HypoPG ======= HypoPG is a PostgreSQL extension adding support for hypothetical indexes. A hypothetical -- or virtual -- index is an index that doesn't really exist, and thus doesn't cost CPU, disk or any resource to create. They're useful for finding out whether specific indexes can increase performance for problematic queries, since you can see whether PostgreSQL would use these indexes without having to spend resources to create them. For more thorough information, please consult the [official documentation](https://hypopg.readthedocs.io). For other general information, you can also consult [this blog post](https://rjuju.github.io/postgresql/2015/07/02/how-about-hypothetical-indexes.html). 
Installation ------------ - Compatible with PostgreSQL 9.2 and above - Needs PostgreSQL header files - Decompress the tarball - `sudo make install` - In every needed database: `CREATE EXTENSION hypopg;` Updating the extension ---------------------- Note that hypopg doesn't provide extension upgrade scripts, as there's no data saved in any of the objects created. Therefore, you need to first drop the extension and then create it again to get the new version. Usage ----- NOTE: The hypothetical indexes are contained in a single backend. Therefore, if you add multiple hypothetical indexes, concurrent connections doing `EXPLAIN` won't be bothered by your hypothetical indexes. Assuming a simple test case: rjuju=# CREATE TABLE hypo AS SELECT id, 'line ' || id AS val FROM generate_series(1,10000) id; rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------- Seq Scan on hypo (cost=0.00..180.00 rows=1 width=13) Filter: (id = 1) (2 rows) The easiest way to create a hypothetical index is to use the `hypopg_create_index` function with a regular `CREATE INDEX` statement as its argument. For instance: rjuju=# SELECT * FROM hypopg_create_index('CREATE INDEX ON hypo (id)'); NOTE: Some information from the `CREATE INDEX` statement will be ignored, such as the index name if provided. Some of the ignored information will be handled in a future release. You can check the available hypothetical indexes in your own backend: rjuju=# SELECT * FROM hypopg_list_indexes ; indexrelid | indexname | nspname | relname | amname -----------+-------------------------------------------+---------+---------+-------- 41072 | <41072>btree_hypo_id | public | hypo | btree If you need more technical information on the hypothetical indexes, the `hypopg()` function will return the hypothetical indexes in a similar way as the `pg_index` system catalog. And now, let's see if your previous `EXPLAIN` statement would use such an index: rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------------------ Index Scan using <41072>btree_hypo_id on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) Of course, only `EXPLAIN` without `ANALYZE` will use hypothetical indexes: rjuju=# EXPLAIN ANALYZE SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------------------------------- Seq Scan on hypo (cost=0.00..180.00 rows=1 width=13) (actual time=0.036..6.072 rows=1 loops=1) Filter: (id = 1) Rows Removed by Filter: 9999 Planning time: 0.109 ms Execution time: 6.113 ms (5 rows) To remove your backend's hypothetical indexes, you can use the function `hypopg_drop_index(indexrelid)` with the OID that the `hypopg_list_indexes` view returns, call `hypopg_reset()` to remove them all at once, or just close your current connection. Continuing with the above case, you can also hide existing indexes, but you should first use `hypopg_reset()` to clear the effects of the previously created hypothetical indexes. Create two real indexes and run `EXPLAIN`: rjuju=# SELECT hypopg_reset(); rjuju=# CREATE INDEX ON hypo(id); rjuju=# CREATE INDEX ON hypo(id, val); rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ---------------------------------------------------------------------------------- Index Only Scan using hypo_id_val_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) The query plan is using the `hypo_id_val_idx` index. 
Use `hypopg_hide_index(oid)` to hide one of the indexes: rjuju=# SELECT hypopg_hide_index('hypo_id_val_idx'::REGCLASS); rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------- Index Scan using hypo_id_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) The query plan is using the other index `hypo_id_idx` now. Use `hypopg_hide_index(oid)` to hide it: rjuju=# SELECT hypopg_hide_index('hypo_id_idx'::REGCLASS); rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------- Seq Scan on hypo (cost=0.00..180.00 rows=1 width=13) Filter: (id = 1) (2 rows) And now the query plan changes back to `Seq Scan`. Use `hypopg_unhide_index(oid)` to restore index: rjuju=# SELECT hypopg_unhide_index('hypo_id_idx'::regclass); rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------- Index Scan using hypo_id_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) Of course, you can also hide hypothetical indexes: rjuju=# SELECT hypopg_create_index('CREATE INDEX ON hypo(id)'); rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------------------ Index Scan using "<12659>btree_hypo_id" on hypo (cost=0.04..8.05 rows=1 width=13) Index Cond: (id = 1) (2 rows) rjuju=# SELECT hypopg_hide_index(12659); rjuju=# EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------- Seq Scan on hypo (cost=0.00..180.00 rows=1 width=13) Filter: (id = 1) (2 rows) You can check which indexes are hidden using `hypopg_hidden_indexes()` or the `hypopg_hidden_indexes` view: rjuju=# SELECT * FROM hypopg_hidden_indexes(); indexid --------- 526604 526603 12659 (3 rows) rjuju=# SELECT * FROM hypopg_hidden_indexes; indexrelid | index_name | schema_name | table_name | am_name | is_hypo ------------+----------------------+-------------+------------+---------+--------- 12659 | <12659>btree_hypo_id | public | hypo | btree | t 526603 | hypo_id_idx | public | hypo | btree | f 526604 | hypo_id_val_idx | public | hypo | btree | f (3 rows) To restore all existing indexes, you can use the function `hypopg_unhide_all_indexes()`. Note that the functionality to hide existing indexes only applies to the EXPLAIN command in the current session and will not affect other sessions. hypopg-1.4.0/TODO.md000066400000000000000000000015341443433066400141230ustar00rootroot00000000000000TODO ==== Important --------- - [X] Choose a better naming convention, including the index oid - [X] handle multiple columns - [X] handle collation - [ ] handle GIN access method - [ ] handle GiST access method - [ ] handle SP-GiST access method - [X] handle BRIN access method - [X] better formula for number of pages in index - [ ] handle tree height - [X] Add check for btree: total column size must not exceed BTMaxItemSize (maybe less, just in case?) - Add some more (or enhance) function. 
Following are interesting: - [X] estimated index size - [ ] estimated number of lines - [X] add hypopg_get_indexdef(oid) Less important -------------- - [ ] specify tablespace - [ ] Compatibility PG 9.2- - [X] handle unique index - [X] handle reverse and nulls first - [X] handle index on expression - [X] handle index on predicate - [ ] specify a bloat factor hypopg-1.4.0/debian/000077500000000000000000000000001443433066400142535ustar00rootroot00000000000000hypopg-1.4.0/debian/changelog000066400000000000000000000025751443433066400161360ustar00rootroot00000000000000hypopg (1.4.0-1) unstable; urgency=medium * New upstream version. -- Julien Rouhaud Sat, 27 May 2021 15:26:37 +0800 hypopg (1.3.1-1) unstable; urgency=medium * New upstream version. * Fix github watch file. * Add myself to Uploaders. -- Christoph Berg Fri, 08 Oct 2021 10:40:57 +0200 hypopg (1.2.0-1) unstable; urgency=medium * New upstream version. -- Julien Rouhaud Fri, 26 Feb 2021 14:51:06 +0800 hypopg (1.1.4-2) unstable; urgency=medium * Team upload for PostgreSQL 13. * Use source format 3.0 (quilt). * Use dh --with pgxs_loop. * DH 13. * R³: no. * debian/tests: Use 'make' instead of postgresql-server-dev-all. -- Christoph Berg Sun, 18 Oct 2020 22:16:44 +0200 hypopg (1.1.4-1) unstable; urgency=medium * New upstream version with PG13 support. -- Julien Rouhaud Wed, 24 Jun 2020 11:18:47 +0000 hypopg (1.1.3-1) unstable; urgency=low * New upstream version with PG12 support. -- Julien Rouhaud Sun, 16 Jun 2019 10:47:38 +0100 hypopg (1.1.2-1) unstable; urgency=low * New upstream version with PG11 support. -- Julien Rouhaud Wed, 30 May 2018 04:50:12 +0100 hypopg (1.1.1-1) unstable; urgency=low * Initial release. -- Julien Rouhaud Sat, 24 Mar 2018 10:27:33 +0100 hypopg-1.4.0/debian/control000066400000000000000000000016321443433066400156600ustar00rootroot00000000000000Source: hypopg Section: database Priority: optional Maintainer: Julien Rouhaud Uploaders: Christoph Berg , Standards-Version: 4.6.0 Rules-Requires-Root: no Build-Depends: debhelper-compat (= 13), postgresql-all (>= 217~) Homepage: https://hypopg.readthedocs.io/ Vcs-Browser: https://github.com/HypoPG/hypopg Vcs-Git: https://github.com/HypoPG/hypopg.git Package: postgresql-14-hypopg Architecture: any Depends: ${misc:Depends}, ${shlibs:Depends}, postgresql-14 Description: PostgreSQL extension adding support for hypothetical indexes. An hypothetical, or virtual, index is an index that doesn't really exists, and thus doesn't cost CPU, disk or any resource to create. They're useful to know if specific indexes can increase performance for problematic queries, since you can know if PostgreSQL will use these indexes or not without having to spend resources to create them. hypopg-1.4.0/debian/control.in000066400000000000000000000016501443433066400162650ustar00rootroot00000000000000Source: hypopg Section: database Priority: optional Maintainer: Julien Rouhaud Uploaders: Christoph Berg , Standards-Version: 4.6.0 Rules-Requires-Root: no Build-Depends: debhelper-compat (= 13), postgresql-all (>= 217~) Homepage: https://hypopg.readthedocs.io/ Vcs-Browser: https://github.com/HypoPG/hypopg Vcs-Git: https://github.com/HypoPG/hypopg.git Package: postgresql-PGVERSION-hypopg Architecture: any Depends: ${misc:Depends}, ${shlibs:Depends}, postgresql-PGVERSION Description: PostgreSQL extension adding support for hypothetical indexes. An hypothetical, or virtual, index is an index that doesn't really exists, and thus doesn't cost CPU, disk or any resource to create. 
They're useful to know if specific indexes can increase performance for problematic queries, since you can know if PostgreSQL will use these indexes or not without having to spend resources to create them. hypopg-1.4.0/debian/copyright000066400000000000000000000024131443433066400162060ustar00rootroot00000000000000Portions Copyright (c) 2015-2023, PostgreSQL GLobal Development Group Portions Copyright (c) 1994, The Regents of the University of California Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies. IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. Contributors to HypoPG: * Julien Rouhaud * Thom brown * Ronan Dunklau * Мосолов Константин * Andrew Kane * Rob Stolarz * Jeremy Finzel * Christoph Berg hypopg-1.4.0/debian/pgversions000066400000000000000000000000051443433066400163700ustar00rootroot000000000000009.2+ hypopg-1.4.0/debian/rules000077500000000000000000000007331443433066400153360ustar00rootroot00000000000000#!/usr/bin/make -f PKGVER = $(shell dpkg-parsechangelog | awk -F '[:-]' '/^Version:/ { print substr($$2, 2) }') EXCLUDE = --exclude-vcs --exclude=debian override_dh_installdocs: dh_installdocs --all CONTRIBUTORS.md README.md rm -rvf debian/*/usr/share/doc/postgresql-doc-* override_dh_installchangelogs: dh_installchangelogs CHANGELOG.md orig: debian/control clean cd .. 
&& tar czf hypopg_$(PKGVER).orig.tar.gz $(EXCLUDE) hypopg-$(PKGVER) %: dh $@ --with pgxs_loop hypopg-1.4.0/debian/source/000077500000000000000000000000001443433066400155535ustar00rootroot00000000000000hypopg-1.4.0/debian/source/format000066400000000000000000000000141443433066400167610ustar00rootroot000000000000003.0 (quilt) hypopg-1.4.0/debian/tests/000077500000000000000000000000001443433066400154155ustar00rootroot00000000000000hypopg-1.4.0/debian/tests/control000066400000000000000000000001271443433066400170200ustar00rootroot00000000000000Depends: @, make, postgresql-contrib-14 Tests: installcheck Restrictions: allow-stderr hypopg-1.4.0/debian/tests/control.in000066400000000000000000000001361443433066400174250ustar00rootroot00000000000000Depends: @, make, postgresql-contrib-PGVERSION Tests: installcheck Restrictions: allow-stderr hypopg-1.4.0/debian/tests/installcheck000077500000000000000000000000551443433066400200070ustar00rootroot00000000000000#!/bin/sh set -eu pg_buildext installcheck hypopg-1.4.0/debian/watch000066400000000000000000000001031443433066400152760ustar00rootroot00000000000000version=4 https://github.com/hypopg/hypopg/releases .*/(.*).tar.gz hypopg-1.4.0/docs/000077500000000000000000000000001443433066400137615ustar00rootroot00000000000000hypopg-1.4.0/docs/.gitignore000066400000000000000000000000071443433066400157460ustar00rootroot00000000000000_build hypopg-1.4.0/docs/Makefile000066400000000000000000000011361443433066400154220ustar00rootroot00000000000000# Minimal makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = python -msphinx SPHINXPROJ = HypoPG SOURCEDIR = . BUILDDIR = _build # Put it first so that "make" without argument is like "make help". help: @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) .PHONY: help Makefile # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). %: Makefile @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)hypopg-1.4.0/docs/conf.py000066400000000000000000000140261443433066400152630ustar00rootroot00000000000000#!/usr/bin/env python3 # -*- coding: utf-8 -*- # # HypoPG documentation build configuration file, created by # sphinx-quickstart on Fri Mar 16 22:25:47 2018. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # import os # import sys # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.imgmath', 'sphinx.ext.ifconfig'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix(es) of source filenames. 
# You can specify multiple suffix as a list of string: # # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = 'HypoPG' copyright = '2015-2023, Julien Rouhaud' author = 'Julien Rouhaud' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '' # The full version, including alpha/beta/rc tags. release = '' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = None # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = True # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # #html_theme = 'alabaster' # on_rtd is whether we are on readthedocs.io, this line of code grabbed from # docs.readthedocs.io on_rtd = os.environ.get('READTHEDOCS', None) == 'True' if not on_rtd: # only import and set the theme if we're building docs locally import sphinx_rtd_theme html_theme = 'sphinx_rtd_theme' html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # otherwise, readthedocs.io uses their theme by default, so no need to specify it # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # # html_theme_options = {} # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Custom sidebar templates, must be a dictionary that maps document names # to template names. # # This is required for the alabaster theme # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars html_sidebars = { '**': [ 'about.html', 'navigation.html', 'relations.html', # needs 'show_related': True theme option to display 'searchbox.html', 'donate.html', ] } # -- Options for HTMLHelp output ------------------------------------------ # Output file base name for HTML help builder. htmlhelp_basename = 'HypoPGdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # # 'preamble': '', # Latex figure (float) alignment # # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). 
latex_documents = [ (master_doc, 'HypoPG.tex', 'HypoPG Documentation', 'Julien Rouhaud', 'manual'), ] # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'hypopg', 'HypoPG Documentation', [author], 1) ] # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'HypoPG', 'HypoPG Documentation', author, 'HypoPG', 'One line description of project.', 'Miscellaneous'), ] # -- Options for Epub output ---------------------------------------------- # Bibliographic Dublin Core info. epub_title = project epub_author = author epub_publisher = author epub_copyright = copyright # The unique identifier of the text. This can be a ISBN number # or the project homepage. # # epub_identifier = '' # A unique identification for the text. # # epub_uid = '' # A list of files that should not be packed into the epub file. epub_exclude_files = ['search.html'] hypopg-1.4.0/docs/contributing.rst000066400000000000000000000011661443433066400172260ustar00rootroot00000000000000Contributing ============ HypoPG is an open source project, distributed under the `PostgreSQL licence `_. Talk ---- If you have suggestions, feature request or just want to say hi you can join the **#hypopg** IRC channel on freenode. .. _bug_reports: Bug reports ----------- If you've found a bug, please report it on the `HypoPG bug-tracker on Github `_. Hacking ------- If you want to fix a bug, enhance the documentation or develop new features, feel free to clone the `git repository on Github `_. hypopg-1.4.0/docs/hypothetical_indexes.rst000066400000000000000000000006611443433066400207320ustar00rootroot00000000000000.. _hypothetical_indexes: Hypothetical Indexes ==================== A hypothetical, or virtual, index is an index that does not really exist, and therefore does not cost CPU, disk or any resource to create. They are useful to find out whether specific indexes can increase the performance for problematic queries, since you can discover if PostgreSQL will use these indexes or not without having to spend resources to create them. hypopg-1.4.0/docs/index.rst000066400000000000000000000011461443433066400156240ustar00rootroot00000000000000.. title:: HypoPG: Hypothetical indexes for PostgreSQL HypoPG ====== `HypoPG `_ is a `PostgreSQL `_ extension, adding support for :ref:`hypothetical_indexes`. It's compatible with **PostgreSQL 9.2 and above**. .. note:: This documentation is a work in progress. If you're looking for something and can't find it here, please `report an issue `_ so I can enhance the documentation. .. toctree:: :maxdepth: 2 :caption: Contents: hypothetical_indexes installation usage contributing hypopg-1.4.0/docs/installation.rst000066400000000000000000000064421443433066400172220ustar00rootroot00000000000000.. _installation: Installation ============ Requirements ------------ - PostgreSQL 9.2+ Packages -------- Hypopg is available as a package on some GNU/Linux distributions: - RHEL/Rocky Linux HypoPG is available as a package using `the PGDG packages `_. Once the PGDG repository is setup, you just need to install the package. As root: .. code-block:: bash yum install hypopg - Debian / Ubuntu HypoPG is available as a package using `the PGDG packages `_. 
Once the PGDG repository is setup, you just need to install the package. As root: .. code-block:: bash apt install postgresql-XY-hypopg where XY is the major version for which you want to install hypopg. - Archlinux Hypopg is available on the `AUR repository `_. If you have **yaourt** setup, you can simply install the `hypopg-git` package with the following command: .. code-block:: bash yaourt -S hypopg-git Otherwise, look at the `official documentation `_ to manually install the package. .. note:: Installing this package will use the current development version. If you want to install a specific version, please see the :ref:`install_from_source` section. .. _install_from_source: Installation from sources ------------------------- To install HypoPG from sources, you need the following extra requirements: - PostgreSQL development packages .. note:: On Debian/Ubuntu systems, the development packages are named `postgresql-server-dev-X`, X being the major version. On RHEL/Centos systems, the development packages are named `postgresqlX-devel`, X being the major version. - A C compiler and `make` - `unzip` - optionally the `wget` tool - a user with `sudo` privilege, or a root access .. note:: If you don't have `sudo` or if you user isn't authorized to issue command as root, you should do all the following commands as **root**. First, you need to download HypoPG source code. If you want the development version, you can download it `from here `_, or via command line: .. code-block:: bash wget https://github.com/HypoPG/hypopg/archive/master.zip If you want a specific version, you can chose `the version you want here `_ and follow the related download link. For instance, if you want to install the version 1.0.0, you can download it from the command line with the following command: .. code-block:: bash wget https://github.com/HypoPG/hypopg/archive/1.0.0.zip Then, you need to extract the downloaded archive with `unzip` and go to the extracted directory. For instance, if you downloaded the latest development version: .. code-block:: bash unzip master.zip cd hypopg-master You can now compile and install HypoPG. Simply run: .. code-block:: bash make sudo make install .. note:: If you were doing these commands as **root**, you don't need to use sudo. The last command should therefore be: .. code-block:: bash make install If no errors occured, HypoPG is now available! If you need help on how to use it, please refer to the :ref:`usage` section. hypopg-1.4.0/docs/make.bat000066400000000000000000000014441443433066400153710ustar00rootroot00000000000000@ECHO OFF pushd %~dp0 REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=python -msphinx ) set SOURCEDIR=. set BUILDDIR=_build set SPHINXPROJ=HypoPG if "%1" == "" goto help %SPHINXBUILD% >NUL 2>NUL if errorlevel 9009 ( echo. echo.The Sphinx module was not found. Make sure you have Sphinx installed, echo.then set the SPHINXBUILD environment variable to point to the full echo.path of the 'sphinx-build' executable. Alternatively you may add the echo.Sphinx directory to PATH. echo. 
echo.If you don't have Sphinx installed, grab it from echo.http://sphinx-doc.org/ exit /b 1 ) %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% goto end :help %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% :end popd hypopg-1.4.0/docs/requirements.txt000066400000000000000000000000301443433066400172360ustar00rootroot00000000000000sphinx sphinx_rtd_theme hypopg-1.4.0/docs/usage.rst000066400000000000000000000324761443433066400156310ustar00rootroot00000000000000.. _usage: Usage ===== Introduction ------------ HypoPG is useful if you want to check whether some index would help one or multiple queries. Therefore, you should already know which queries you need to optimize, and have ideas about which indexes you want to try. Also, the hypothetical indexes that HypoPG creates are not stored in any catalog, but in your connection's private memory. Therefore, they won't bloat any table and won't impact any concurrent connection. Also, since hypothetical indexes don't really exist, HypoPG makes sure they can only be used with a simple EXPLAIN statement (without the ANALYZE option). Install the extension --------------------- As with any other extension, you have to install it on every database where you want to be able to use it. This is simply done by executing the following query, connected to the database where you want to install HypoPG, with a user having enough privileges: .. code-block:: psql CREATE EXTENSION hypopg ; HypoPG is now available. You can easily check if the extension is present using `psql `_: .. code-block:: psql :emphasize-lines: 5 \dx List of installed extensions Name | Version | Schema | Description ---------+---------+------------+------------------------------------- hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language (2 rows) As you can see, hypopg version 1.1.0 is installed. If you need to check using plain SQL, please refer to the `pg_extension table documentation `_. Configuration ------------- The following configuration parameters (GUCs) are available, and can be changed interactively: hypopg.enabled: Defaults to ``on``. Use this parameter to globally enable or disable HypoPG. When HypoPG is disabled, no hypothetical index will be used, but the defined hypothetical indexes won't be removed. hypopg.use_real_oids: Defaults to ``off``. By default, HypoPG won't use "real" object identifiers, but will instead borrow ones from the ~ 14000 / 16384 range (respectively the lowest unused oid less than FirstNormalObjectId, and FirstNormalObjectId), which is reserved by PostgreSQL for use in future releases. This doesn't cause any problem, as the free range is dynamically computed the first time a connection uses HypoPG, and it has the advantage of working on a standby server. But the drawback is that you can't have more than approximately 2500 hypothetical indexes at the same time, and creating a new hypothetical index will become very slow once the maximum number of objects has been created, until ``hypopg_reset()`` is called. If those drawbacks are problematic, you can enable this parameter. HypoPG will then ask for a real object identifier, which will need to obtain more locks and won't work on a standby, but will allow using the full range of object identifiers. Note that switching this parameter doesn't require resetting the entries; both kinds can coexist at the same time. 
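Both parameters can be adjusted for the current session with the standard ``SET`` and ``RESET`` commands. The short sketch below (output omitted; it only uses the two GUCs documented above) disables HypoPG for the current session and then restores the default behavior:

.. code-block:: psql

   -- hypothetical indexes are ignored by EXPLAIN while hypopg.enabled is off
   SET hypopg.enabled = off;

   -- restore the default behavior for this session
   RESET hypopg.enabled;

   -- hypopg.use_real_oids can be toggled the same way
   SET hypopg.use_real_oids = on;

These settings only affect the current session; other connections keep their own configuration.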
Supported access methods ------------------------ The following access methods are supported: - btree - brin - hash (requires PostgreSQL 10 or above) - bloom (requires the bloom extension to be installed) Create a hypothetical index --------------------------- .. note:: Using HypoPG require some knowledge on the **EXPLAIN** command. If you need more information about this command, you can check `the official documentation `_. There are also a lot of very good resources available. For clarity, let's see how it works with a very simple test case: .. code-block:: psql CREATE TABLE hypo (id integer, val text) ; INSERT INTO hypo SELECT i, 'line ' || i FROM generate_series(1, 100000) i ; VACUUM ANALYZE hypo ; This table doesn't have any index. Let's assume we want to check if an index would help a simple query. First, let's see how it behaves: .. code-block:: psql EXPLAIN SELECT val FROM hypo WHERE id = 1; QUERY PLAN -------------------------------------------------------- Seq Scan on hypo (cost=0.00..1791.00 rows=1 width=14) Filter: (id = 1) (2 rows) A plain sequential scan is used, since no index exists on the table. A simple btree index on the **id** column should help this query. Let's check with HypoPG. The function **hypopg_create_index()** will accept any standard **CREATE INDEX** statement(s) (any other statement passed to this function will be ignored), and create a hypothetical index for each: .. code-block:: psql SELECT * FROM hypopg_create_index('CREATE INDEX ON hypo (id)') ; indexrelid | indexname ------------+---------------------- 18284 | <18284>btree_hypo_id (1 row) The function returns two columns: - the object identifier of the hypothetical index - the generated hypothetical index name We can run the EXPLAIN again to see if PostgreSQL would use this index: .. code-block:: psql :emphasize-lines: 4 EXPLAIN SELECT val FROM hypo WHERE id = 1; QUERY PLAN ---------------------------------------------------------------------------------- Index Scan using <18284>btree_hypo_id on hypo (cost=0.04..8.06 rows=1 width=10) Index Cond: (id = 1) (2 rows) Yes, PostgreSQL would use such an index. Just to be sure, let's check that the hypothetical index won't be used to acually run the query: .. code-block:: psql EXPLAIN ANALYZE SELECT val FROM hypo WHERE id = 1; QUERY PLAN --------------------------------------------------------------------------------------------------- Seq Scan on hypo (cost=0.00..1791.00 rows=1 width=10) (actual time=0.046..46.390 rows=1 loops=1) Filter: (id = 1) Rows Removed by Filter: 99999 Planning time: 0.160 ms Execution time: 46.460 ms (5 rows) That's all you need to create hypothetical indexes and see if PostgreSQL would use such indexes. Manipulate hypothetical indexes ------------------------------- Some other convenience functions and views are available: - **hypopg_list_indexes**: view that lists all hypothetical indexes that have been created .. code-block:: psql SELECT * FROM hypopg_list_indexes ; indexrelid | indexname | nspname | relname | amname ------------+----------------------+---------+---------+-------- 18284 | <18284>btree_hypo_id | public | hypo | btree (1 row) - **hypopg()**: function that lists all hypothetical indexes that have been created with the same format as **pg_index** .. 
code-block:: psql SELECT * FROM hypopg() ; indexname | indexrelid | indrelid | innatts | indisunique | indkey | indcollation | indclass | indoption | indexprs | indpred | amid ----------------------+------------+----------+---------+-------------+--------+--------------+----------+-----------+----------+---------+------ <18284>btree_hypo_id | 13543 | 18122 | 1 | f | 1 | 0 | 1978 | | | | 403 (1 row) - **hypopg_get_indexdef(oid)**: function that lists the CREATE INDEX statement that would recreate a stored hypothetical index .. code-block:: psql SELECT indexname, hypopg_get_indexdef(indexrelid) FROM hypopg_list_indexes ; indexname | hypopg_get_indexdef ----------------------+---------------------------------------------- <18284>btree_hypo_id | CREATE INDEX ON public.hypo USING btree (id) (1 row) - **hypopg_relation_size(oid)**: function that estimates how big a hypothetical index would be: .. code-block:: psql SELECT indexname, pg_size_pretty(hypopg_relation_size(indexrelid)) FROM hypopg_list_indexes ; indexname | pg_size_pretty ----------------------+---------------- <18284>btree_hypo_id | 2544 kB (1 row) - **hypopg_drop_index(oid)**: function that removes the given hypothetical index - **hypopg_reset()**: function that removes all hypothetical indexes Hypothetically hide existing indexes ------------------------------------ You can hide both existing and hypothetical indexes hypothetically. If you want to test it as described in the documentation, you should first use **hypopg_reset()** to clear the effects of any other hypothetical indexes. As a simple case, let's consider two indexes: .. code-block:: psql SELECT hypopg_reset(); CREATE INDEX ON hypo(id); CREATE INDEX ON hypo(id, val); .. code-block:: psql :emphasize-lines: 4 EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ---------------------------------------------------------------------------------- Index Only Scan using hypo_id_val_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) The query plan is using the **hypo_id_val_idx** index now. - **hypopg_hide_index(oid)**: function that allows you to hide an index in the EXPLAIN output by using its OID. It returns `true` if the index was successfully hidden, and `false` otherwise. .. code-block:: psql :emphasize-lines: 10 SELECT hypopg_hide_index('hypo_id_val_idx'::REGCLASS); hypopg_hide_index ------------------- t (1 row) EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------- Index Scan using hypo_id_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) As an example, let's assume that the query plan is currently using the **hypo_id_val_idx** index. To continue testing, use the **hypopg_hide_index(oid)** function to hide another index. .. code-block:: psql :emphasize-lines: 10 SELECT hypopg_hide_index('hypo_id_idx'::REGCLASS); hypopg_hide_index ------------------- t (1 row) EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------- Seq Scan on hypo (cost=0.00..180.00 rows=1 width=13) Filter: (id = 1) (2 rows) - **hypopg_unhide_index(oid)**: function that restore a previously hidden index in the EXPLAIN output by using its OID. It returns `true` if the index was successfully restored, and `false` otherwise. .. 
code-block:: psql :emphasize-lines: 10 SELECT hypopg_unhide_index('hypo_id_idx'::regclass); hypopg_unhide_index ------------------- t (1 row) EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------- Index Scan using hypo_id_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) - **hypopg_unhide_all_indexes()**: function that restores all hidden indexes and returns void. - **hypopg_hidden_indexes()**: function that returns a list of OIDs for all hidden indexes. .. code-block:: psql SELECT * FROM hypopg_hidden_indexes(); indexid --------- 526604 (1 row) - **hypopg_hidden_indexes**: view that returns a formatted list of all hidden indexes. .. code-block:: psql SELECT * FROM hypopg_hidden_indexes; indexrelid | index_name | schema_name | table_name | am_name | is_hypo -------------+----------------------+-------------+------------+---------+--------- 526604 | hypo_id_val_idx | public | hypo | btree | f (1 row) .. note:: Hypothetical indexes can be hidden as well. .. code-block:: psql :emphasize-lines: 10 SELECT hypopg_create_index('CREATE INDEX ON hypo(id)'); hypopg_create_index ------------------------------ (12659,<12659>btree_hypo_id) (1 row) EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------------------ Index Scan using "<12659>btree_hypo_id" on hypo (cost=0.04..8.05 rows=1 width=13) Index Cond: (id = 1) (2 rows) Now that the hypothetical index is being used, we can try hiding it to see the change: .. code-block:: psql :emphasize-lines: 10 SELECT hypopg_hide_index(12659); hypopg_hide_index ------------------- t (1 row) EXPLAIN SELECT * FROM hypo WHERE id = 1; QUERY PLAN ------------------------------------------------------------------------- Index Scan using hypo_id_idx on hypo (cost=0.29..8.30 rows=1 width=13) Index Cond: (id = 1) (2 rows) SELECT * FROM hypopg_hidden_indexes; indexrelid | index_name | schema_name | table_name | am_name | is_hypo -------------+----------------------+-------------+------------+---------+--------- 12659 | <12659>btree_hypo_id | public | hypo | btree | t 526604 | hypo_id_val_idx | public | hypo | btree | f (2 rows) .. note:: If a hypothetical index has been hidden, it will be automatically unhidden when it is deleted using **hypopg_drop_index(oid)** or **hypopg_reset()**. .. 
code-block:: psql SELECT hypopg_drop_index(12659); SELECT * FROM hypopg_hidden_indexes; indexrelid | index_name | schema_name | table_name | am_name | is_hypo -------------+----------------------+-------------+------------+---------+--------- 526604 | hypo_id_val_idx | public | hypo | btree | f (2 rows) hypopg-1.4.0/expected/000077500000000000000000000000001443433066400146325ustar00rootroot00000000000000hypopg-1.4.0/expected/hypo_brin.out000066400000000000000000000007741443433066400173640ustar00rootroot00000000000000-- Hypothetical BRIN index tests CREATE TABLE hypo_brin (id integer); INSERT INTO hypo_brin SELECT generate_series(1, 10000); ANALYZE hypo_brin; SELECT COUNT(*) AS nb FROM public.hypopg_create_index('CREATE INDEX ON hypo_brin USING brin (id);'); nb ---- 1 (1 row) -- Should use hypothetical index SET enable_seqscan = 0; SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo_brin WHERE id = 1') e WHERE e ~ 'Bitmap Index Scan.*<\d+>brin_hypo_brin.*'; count ------- 1 (1 row) DROP TABLE hypo_brin; hypopg-1.4.0/expected/hypo_hash.out000066400000000000000000000013061443433066400173450ustar00rootroot00000000000000-- hypothetical hash indexes, pg10+ -- Remove all the hypothetical indexes if any SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- Create normal index SELECT COUNT(*) AS NB FROM hypopg_create_index('CREATE INDEX ON hypo USING hash (id)'); nb ---- 1 (1 row) -- Should use hypothetical index using a regular Index Scan SELECT COUNT(*) FROM do_explain('SELECT val FROM hypo WHERE id = 1') e WHERE e ~ 'Index Scan.*<\d+>hash_hypo.*'; count ------- 1 (1 row) -- Deparse the index DDL SELECT hypopg_get_indexdef(indexrelid) FROM hypopg(); hypopg_get_indexdef --------------------------------------------- CREATE INDEX ON public.hypo USING hash (id) (1 row) hypopg-1.4.0/expected/hypo_hide_index.out000066400000000000000000000123541443433066400205270ustar00rootroot00000000000000-- Hypothetically hiding existing indexes tests -- Remove all the hypothetical indexes if any SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- The EXPLAIN initial state SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; count ------- 0 (1 row) -- Create real index in hypo and use this index CREATE INDEX hypo_id_idx ON hypo(id); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; count ------- 1 (1 row) -- Should be zero SELECT COUNT(*) FROM hypopg_hidden_indexes(); count ------- 0 (1 row) -- The hypo_id_idx index should not be used SELECT hypopg_hide_index('hypo_id_idx'::regclass); hypopg_hide_index ------------------- t (1 row) SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; count ------- 0 (1 row) -- Should be only one record SELECT COUNT(*) FROM hypopg_hidden_indexes(); count ------- 1 (1 row) SELECT table_name,index_name FROM hypopg_hidden_indexes; table_name | index_name ------------+------------- hypo | hypo_id_idx (1 row) -- Create the real index again and -- EXPLAIN should use this index instead of the previous one CREATE index hypo_id_val_idx ON hypo(id, val); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_val_idx'; count ------- 1 (1 row) -- Shouldn't use any index SELECT hypopg_hide_index('hypo_id_val_idx'::regclass); hypopg_hide_index ------------------- t (1 row) SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_val_idx'; count ------- 0 (1 row) -- Should be two records SELECT 
table_name,index_name FROM hypopg_hidden_indexes; table_name | index_name ------------+----------------- hypo | hypo_id_idx hypo | hypo_id_val_idx (2 rows) -- Try to add one repeatedly or add another wrong index oid SELECT hypopg_hide_index('hypo_id_idx'::regclass); hypopg_hide_index ------------------- f (1 row) SELECT hypopg_hide_index('hypo'::regclass); hypopg_hide_index ------------------- f (1 row) SELECT hypopg_hide_index(0); hypopg_hide_index ------------------- f (1 row) -- Also of course can be used to hide hypothetical indexes SELECT COUNT(*) FROM hypopg_create_index('create index on hypo(id,val);'); count ------- 1 (1 row) SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) SELECT hypopg_hide_index((SELECT indexrelid FROM hypopg_list_indexes LIMIT 1)); hypopg_hide_index ------------------- t (1 row) SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 0 (1 row) -- Should be only three records SELECT COUNT(*) FROM hypopg_hidden_indexes; count ------- 3 (1 row) -- Hypothetical indexes should be unhidden when deleting SELECT hypopg_drop_index((SELECT indexrelid FROM hypopg_list_indexes LIMIT 1)); hypopg_drop_index ------------------- t (1 row) -- Should become two records SELECT COUNT(*) FROM hypopg_hidden_indexes; count ------- 2 (1 row) -- Hypopg_reset can also unhidden the hidden indexes -- due to the deletion of hypothetical indexes. SELECT COUNT(*) FROM hypopg_create_index('create index on hypo(id,val);'); count ------- 1 (1 row) SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) SELECT hypopg_hide_index((SELECT indexrelid FROM hypopg_list_indexes LIMIT 1)); hypopg_hide_index ------------------- t (1 row) -- Changed from three records to two records. 
SELECT COUNT(*) FROM hypopg_hidden_indexes; count ------- 3 (1 row) SELECT hypopg_reset(); hypopg_reset -------------- (1 row) SELECT COUNT(*) FROM hypopg_hidden_indexes; count ------- 2 (1 row) -- Unhide an index SELECT hypopg_unhide_index('hypo_id_idx'::regclass); hypopg_unhide_index --------------------- t (1 row) SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; count ------- 1 (1 row) -- Should become one record SELECT table_name,index_name FROM hypopg_hidden_indexes; table_name | index_name ------------+----------------- hypo | hypo_id_val_idx (1 row) -- Try to delete one repeatedly or delete another wrong index oid SELECT hypopg_unhide_index('hypo_id_idx'::regclass); hypopg_unhide_index --------------------- f (1 row) SELECT hypopg_unhide_index('hypo'::regclass); hypopg_unhide_index --------------------- f (1 row) SELECT hypopg_unhide_index(0); hypopg_unhide_index --------------------- f (1 row) -- Should still have one record SELECT table_name,index_name FROM hypopg_hidden_indexes; table_name | index_name ------------+----------------- hypo | hypo_id_val_idx (1 row) -- Unhide all indexes SELECT hypopg_unhide_all_indexes(); hypopg_unhide_all_indexes --------------------------- (1 row) -- Should change back to the original zero SELECT COUNT(*) FROM hypopg_hidden_indexes(); count ------- 0 (1 row) -- Clean real indexes and hypothetical indexes DROP INDEX hypo_id_idx; DROP INDEX hypo_id_val_idx; SELECT hypopg_reset(); hypopg_reset -------------- (1 row) hypopg-1.4.0/expected/hypo_include.out000066400000000000000000000024531443433066400200510ustar00rootroot00000000000000-- hypothetical indexes using INCLUDE keyword, pg11+ -- Remove all the hypothetical indexes if any SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- Make sure stats and visibility map are up to date VACUUM ANALYZE hypo; -- Should not use hypothetical index -- Create normal index SELECT COUNT(*) AS NB FROM hypopg_create_index('CREATE INDEX ON hypo (id)'); nb ---- 1 (1 row) -- Should use hypothetical index using a regular Index Scan SELECT COUNT(*) FROM do_explain('SELECT val FROM hypo WHERE id = 1') e WHERE e ~ 'Index Scan.*<\d+>btree_hypo.*'; count ------- 1 (1 row) -- Remove all the hypothetical indexes SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- Create INCLUDE index SELECT COUNT(*) AS NB FROM hypopg_create_index('CREATE INDEX ON hypo (id) INCLUDE (val)'); nb ---- 1 (1 row) -- Should use hypothetical index using an Index Only Scan SELECT COUNT(*) FROM do_explain('SELECT val FROM hypo WHERE id = 1') e WHERE e ~ 'Index Only Scan.*<\d+>btree_hypo.*'; count ------- 1 (1 row) -- Deparse the index DDL SELECT hypopg_get_indexdef(indexrelid) FROM hypopg(); hypopg_get_indexdef ------------------------------------------------------------ CREATE INDEX ON public.hypo USING btree (id) INCLUDE (val) (1 row) hypopg-1.4.0/expected/hypo_index_part.out000066400000000000000000000030741443433066400205630ustar00rootroot00000000000000-- Hypothetical on partitioned tables CREATE TABLE hypo_part(id1 integer, id2 integer, id3 integer) PARTITION BY LIST (id1); CREATE TABLE hypo_part_1 PARTITION OF hypo_part FOR VALUES IN (1) PARTITION BY LIST (id2); CREATE TABLE hypo_part_1_1 PARTITION OF hypo_part_1 FOR VALUES IN (1); INSERT INTO hypo_part SELECT 1, 1, generate_series(1, 10000); ANALYZE hypo_part; SET enable_seqscan = 0; -- hypothetical index on root partitioned table should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part (id3)'); nb ---- 1 
(1 row) SELECT 1, COUNT(*) FROM do_explain('SELECT * FROM hypo_part WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part.*'; ?column? | count ----------+------- 1 | 1 (1 row) SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- hypothetical index on non-root partitioned table should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part_1 (id3)'); nb ---- 1 (1 row) SELECT 2, COUNT(*) FROM do_explain('SELECT * FROM hypo_part_1 WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part.*'; ?column? | count ----------+------- 2 | 1 (1 row) SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- hypothetical index on partition should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part_1_1 (id3)'); nb ---- 1 (1 row) SELECT 3, COUNT(*) FROM do_explain('SELECT * FROM hypo_part_1_1 WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part.*'; ?column? | count ----------+------- 3 | 1 (1 row) hypopg-1.4.0/expected/hypo_index_part_10.out000066400000000000000000000022331443433066400210570ustar00rootroot00000000000000-- Hypothetical on partitioned tables CREATE TABLE hypo_part(id1 integer, id2 integer, id3 integer) PARTITION BY LIST (id1); CREATE TABLE hypo_part_1 PARTITION OF hypo_part FOR VALUES IN (1) PARTITION BY LIST (id2); CREATE TABLE hypo_part_1_1 PARTITION OF hypo_part_1 FOR VALUES IN (1); INSERT INTO hypo_part SELECT 1, 1, generate_series(1, 10000); ANALYZE hypo_part; -- hypothetical index on root partitioned table should not work SELECT hypopg_create_index('CREATE INDEX ON hypo_part (id1)'); ERROR: hypopg: cannot create hypothetical index on partitioned table "hypo_part" -- hypothetical index on non-root partitioned table should not work SELECT hypopg_create_index('CREATE INDEX ON hypo_part_1 (id1)'); ERROR: hypopg: cannot create hypothetical index on partitioned table "hypo_part_1" -- hypothetical index on partition should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part_1_1 (id3)'); nb ---- 1 (1 row) -- Should use hypothetical index SET enable_seqscan = 0; SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo_part WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part_1_1.*'; count ------- 1 (1 row) hypopg-1.4.0/expected/hypopg.out000066400000000000000000000120611443433066400166710ustar00rootroot00000000000000-- SETUP CREATE OR REPLACE FUNCTION do_explain(stmt text) RETURNS table(a text) AS $_$ DECLARE ret text; BEGIN FOR ret IN EXECUTE format('EXPLAIN (FORMAT text) %s', stmt) LOOP a := ret; RETURN next ; END LOOP; END; $_$ LANGUAGE plpgsql; CREATE EXTENSION hypopg; CREATE TABLE hypo (id integer, val text); INSERT INTO hypo SELECT i, 'line ' || i FROM generate_series(1,100000) f(i); ANALYZE hypo; -- TESTS SELECT COUNT(*) AS nb FROM public.hypopg_create_index('SELECT 1;CREATE INDEX ON hypo(id); SELECT 2'); WARNING: hypopg: SQL order #1 is not a CREATE INDEX statement WARNING: hypopg: SQL order #3 is not a CREATE INDEX statement nb ---- 1 (1 row) SELECT schema_name, table_name, am_name FROM public.hypopg_list_indexes; schema_name | table_name | am_name -------------+------------+--------- public | hypo | btree (1 row) -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo ORDER BY id') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) -- Should not use hypothetical index SELECT COUNT(*) FROM 
do_explain('SELECT * FROM hypo') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 0 (1 row) -- Add predicate index SELECT COUNT(*) AS nb FROM public.hypopg_create_index('CREATE INDEX ON hypo(id) WHERE id < 5'); nb ---- 1 (1 row) -- This specific index should be used WITH ind AS ( SELECT indexrelid, row_number() OVER (ORDER BY indexrelid) AS num FROM public.hypopg() ), regexp AS ( SELECT regexp_replace(e, '.*<(\d+)>.*', E'\\1', 'g') AS r FROM do_explain('SELECT * FROM hypo WHERE id < 3') AS e ) SELECT num FROM ind JOIN regexp ON ind.indexrelid::text = regexp.r; num ----- 2 (1 row) -- Specify fillfactor SELECT COUNT(*) AS NB FROM public.hypopg_create_index('CREATE INDEX ON hypo(id) WITH (fillfactor = 10)'); nb ---- 1 (1 row) -- Specify an incorrect fillfactor SELECT COUNT(*) AS NB FROM public.hypopg_create_index('CREATE INDEX ON hypo(id) WITH (fillfactor = 1)'); ERROR: value 1 out of bounds for option "fillfactor" DETAIL: Valid values are between "10" and "100". -- Index size estimation SELECT hypopg_relation_size(indexrelid) = current_setting('block_size')::bigint AS one_block FROM hypopg() ORDER BY indexrelid; one_block ----------- f t f (3 rows) -- Should detect invalid argument SELECT hypopg_relation_size(1); ERROR: oid 1 is not a hypothetical index -- locally disable hypoopg SET hypopg.enabled to false; -- no hypothetical index should be used SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 0 (1 row) -- locally re-enable hypoopg SET hypopg.enabled to true; -- hypothetical index should be used SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) -- Remove one hypothetical index SELECT hypopg_drop_index(indexrelid) FROM hypopg() ORDER BY indexrelid LIMIT 1; hypopg_drop_index ------------------- t (1 row) -- Remove all the hypothetical indexes SELECT hypopg_reset(); hypopg_reset -------------- (1 row) -- index on expression SELECT COUNT(*) AS NB FROM public.hypopg_create_index('CREATE INDEX ON hypo (md5(val))'); nb ---- 1 (1 row) -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE md5(val) = md5(''line 1'')') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) -- Deparse an index DDL, with almost every possible pathcode SELECT hypopg_get_indexdef(indexrelid) FROM hypopg_create_index('create index on hypo using btree(id desc, id desc nulls first, id desc nulls last, cast(md5(val) as bpchar) bpchar_pattern_ops) with (fillfactor = 10) WHERE id < 1000 AND id +1 %2 = 3'); hypopg_get_indexdef --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CREATE INDEX ON public.hypo USING btree (id DESC, id DESC, id DESC NULLS LAST, ((md5(val))::bpchar) bpchar_pattern_ops) WITH (fillfactor = 10) WHERE ((id < 1000) AND ((id + (1 % 2)) = 3)) (1 row) -- Make sure the old Oid generator still works. Test it while keeping existing -- entries, as both should be able to coexist. 
SET hypopg.use_real_oids = on; -- Should not use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 0 (1 row) SELECT COUNT(*) AS nb FROM public.hypopg_create_index('CREATE INDEX ON hypo(id);'); nb ---- 1 (1 row) -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; count ------- 1 (1 row) hypopg-1.4.0/hypopg--1.3.1--1.4.0.sql000066400000000000000000000030651443433066400163120ustar00rootroot00000000000000-- This program is open source, licensed under the PostgreSQL License. -- For license terms, see the LICENSE file. -- -- Copyright (C) 2015-2023: Julien Rouhaud -- complain if script is sourced in psql, rather than via CREATE EXTENSION \echo Use "ALTER EXTENSION hypopg" to load this file. \quit CREATE FUNCTION hypopg_hide_index(IN indexid oid) RETURNS bool LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_hide_index'; CREATE FUNCTION hypopg_unhide_index(IN indexid oid) RETURNS bool LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_unhide_index'; CREATE FUNCTION hypopg_unhide_all_indexes() RETURNS void LANGUAGE C VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_unhide_all_indexes'; CREATE FUNCTION hypopg_hidden_indexes() RETURNS TABLE (indexid oid) LANGUAGE C STRICT VOLATILE AS '$libdir/hypopg', 'hypopg_hidden_indexes'; CREATE VIEW hypopg_hidden_indexes AS SELECT h.indexid AS indexrelid, i.relname AS index_name, n.nspname AS schema_name, t.relname AS table_name, m.amname AS am_name, false AS is_hypo FROM hypopg_hidden_indexes() h JOIN pg_index x ON x.indexrelid = h.indexid JOIN pg_class i ON i.oid = h.indexid JOIN pg_namespace n ON n.oid = i.relnamespace JOIN pg_class t ON t.oid = x.indrelid JOIN pg_am m ON m.oid = i.relam UNION ALL SELECT hl.*, true AS is_hypo FROM hypopg_hidden_indexes() hi JOIN hypopg_list_indexes hl on hl.indexrelid = hi.indexid ORDER BY index_name;hypopg-1.4.0/hypopg--1.3.1.sql000066400000000000000000000041071443433066400155750ustar00rootroot00000000000000-- This program is open source, licensed under the PostgreSQL License. -- For license terms, see the LICENSE file. -- -- Copyright (C) 2015-2023: Julien Rouhaud -- complain if script is sourced in psql, rather than via CREATE EXTENSION \echo Use "CREATE EXTENSION hypopg" to load this file. 
\quit SET LOCAL client_encoding = 'UTF8'; CREATE FUNCTION hypopg_reset_index() RETURNS void LANGUAGE C VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_reset_index'; CREATE FUNCTION hypopg_reset() RETURNS void LANGUAGE C VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_reset'; CREATE FUNCTION hypopg_create_index(IN sql_order text, OUT indexrelid oid, OUT indexname text) RETURNS SETOF record LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_create_index'; CREATE FUNCTION hypopg_drop_index(IN indexid oid) RETURNS bool LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_drop_index'; CREATE FUNCTION hypopg(OUT indexname text, OUT indexrelid oid, OUT indrelid oid, OUT innatts integer, OUT indisunique boolean, OUT indkey int2vector, OUT indcollation oidvector, OUT indclass oidvector, OUT indoption oidvector, OUT indexprs pg_node_tree, OUT indpred pg_node_tree, OUT amid oid) RETURNS SETOF record LANGUAGE c COST 100 AS '$libdir/hypopg', 'hypopg'; CREATE VIEW hypopg_list_indexes AS SELECT h.indexrelid, h.indexname AS index_name, n.nspname AS schema_name, coalesce(c.relname, '') AS table_name, am.amname AS am_name FROM hypopg() h LEFT JOIN pg_catalog.pg_class c ON c.oid = h.indrelid LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_catalog.pg_am am ON am.oid = h.amid; CREATE FUNCTION hypopg_relation_size(IN indexid oid) RETURNS bigint LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_relation_size'; CREATE FUNCTION hypopg_get_indexdef(IN indexid oid) RETURNS text LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_get_indexdef'; hypopg-1.4.0/hypopg--1.4.0.sql000066400000000000000000000065231443433066400156010ustar00rootroot00000000000000-- This program is open source, licensed under the PostgreSQL License. -- For license terms, see the LICENSE file. -- -- Copyright (C) 2015-2023: Julien Rouhaud -- complain if script is sourced in psql, rather than via CREATE EXTENSION \echo Use "CREATE EXTENSION hypopg" to load this file. 
\quit SET LOCAL client_encoding = 'UTF8'; CREATE FUNCTION hypopg_reset_index() RETURNS void LANGUAGE C VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_reset_index'; CREATE FUNCTION hypopg_reset() RETURNS void LANGUAGE C VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_reset'; CREATE FUNCTION hypopg_create_index(IN sql_order text, OUT indexrelid oid, OUT indexname text) RETURNS SETOF record LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_create_index'; CREATE FUNCTION hypopg_drop_index(IN indexid oid) RETURNS bool LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_drop_index'; CREATE FUNCTION hypopg(OUT indexname text, OUT indexrelid oid, OUT indrelid oid, OUT innatts integer, OUT indisunique boolean, OUT indkey int2vector, OUT indcollation oidvector, OUT indclass oidvector, OUT indoption oidvector, OUT indexprs pg_node_tree, OUT indpred pg_node_tree, OUT amid oid) RETURNS SETOF record LANGUAGE c COST 100 AS '$libdir/hypopg', 'hypopg'; CREATE VIEW hypopg_list_indexes AS SELECT h.indexrelid, h.indexname AS index_name, n.nspname AS schema_name, coalesce(c.relname, '') AS table_name, am.amname AS am_name FROM hypopg() h LEFT JOIN pg_catalog.pg_class c ON c.oid = h.indrelid LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_catalog.pg_am am ON am.oid = h.amid; CREATE FUNCTION hypopg_relation_size(IN indexid oid) RETURNS bigint LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_relation_size'; CREATE FUNCTION hypopg_get_indexdef(IN indexid oid) RETURNS text LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_get_indexdef'; CREATE FUNCTION hypopg_hide_index(IN indexid oid) RETURNS bool LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_hide_index'; CREATE FUNCTION hypopg_unhide_index(IN indexid oid) RETURNS bool LANGUAGE C STRICT VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_unhide_index'; CREATE FUNCTION hypopg_unhide_all_indexes() RETURNS void LANGUAGE C VOLATILE COST 100 AS '$libdir/hypopg', 'hypopg_unhide_all_indexes'; CREATE FUNCTION hypopg_hidden_indexes() RETURNS TABLE (indexid oid) LANGUAGE C STRICT VOLATILE AS '$libdir/hypopg', 'hypopg_hidden_indexes'; CREATE VIEW hypopg_hidden_indexes AS SELECT h.indexid AS indexrelid, i.relname AS index_name, n.nspname AS schema_name, t.relname AS table_name, m.amname AS am_name, false AS is_hypo FROM hypopg_hidden_indexes() h JOIN pg_index x ON x.indexrelid = h.indexid JOIN pg_class i ON i.oid = h.indexid JOIN pg_namespace n ON n.oid = i.relnamespace JOIN pg_class t ON t.oid = x.indrelid JOIN pg_am m ON m.oid = i.relam UNION ALL SELECT hl.*, true AS is_hypo FROM hypopg_hidden_indexes() hi JOIN hypopg_list_indexes hl on hl.indexrelid = hi.indexid ORDER BY index_name;hypopg-1.4.0/hypopg.c000066400000000000000000000277531443433066400145210ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg.c: Implementation of hypothetical indexes for PostgreSQL * * Some functions are imported from PostgreSQL source code, theses are present * in hypopg_import.* files. * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. 
* * Copyright (C) 2015-2023: Julien Rouhaud * *------------------------------------------------------------------------- */ #include "postgres.h" #include "fmgr.h" #if PG_VERSION_NUM < 120000 #include "access/sysattr.h" #endif #include "access/transam.h" #if PG_VERSION_NUM < 140000 #include "catalog/indexing.h" #endif #if PG_VERSION_NUM >= 110000 #include "catalog/partition.h" #include "nodes/pg_list.h" #include "utils/lsyscache.h" #endif #include "executor/spi.h" #include "miscadmin.h" #include "utils/elog.h" #include "include/hypopg.h" #include "include/hypopg_import.h" #include "include/hypopg_index.h" PG_MODULE_MAGIC; /*--- Variables exported ---*/ bool isExplain; bool hypo_is_enabled; bool hypo_use_real_oids; MemoryContext HypoMemoryContext; /*--- Private variables ---*/ static Oid last_oid = InvalidOid; static Oid min_fake_oid = InvalidOid; static bool oid_wraparound = false; /*--- Functions --- */ PGDLLEXPORT void _PG_init(void); PGDLLEXPORT Datum hypopg_reset(PG_FUNCTION_ARGS); PG_FUNCTION_INFO_V1(hypopg_reset); static void hypo_utility_hook( #if PG_VERSION_NUM >= 100000 PlannedStmt *pstmt, #else Node *parsetree, #endif const char *queryString, #if PG_VERSION_NUM >= 140000 bool readOnlyTree, #endif #if PG_VERSION_NUM >= 90300 ProcessUtilityContext context, #endif ParamListInfo params, #if PG_VERSION_NUM >= 100000 QueryEnvironment *queryEnv, #endif #if PG_VERSION_NUM < 90300 bool isTopLevel, #endif DestReceiver *dest, #if PG_VERSION_NUM < 130000 char *completionTag #else QueryCompletion *qc #endif ); static ProcessUtility_hook_type prev_utility_hook = NULL; static void hypo_executorEnd_hook(QueryDesc *queryDesc); static ExecutorEnd_hook_type prev_ExecutorEnd_hook = NULL; static Oid hypo_get_min_fake_oid(void); static void hypo_get_relation_info_hook(PlannerInfo *root, Oid relationObjectId, bool inhparent, RelOptInfo *rel); static get_relation_info_hook_type prev_get_relation_info_hook = NULL; static bool hypo_index_match_table(hypoIndex *entry, Oid relid); static bool hypo_is_simple_explain(Node *node); void _PG_init(void) { /* Install hooks */ prev_utility_hook = ProcessUtility_hook; ProcessUtility_hook = hypo_utility_hook; prev_ExecutorEnd_hook = ExecutorEnd_hook; ExecutorEnd_hook = hypo_executorEnd_hook; prev_get_relation_info_hook = get_relation_info_hook; get_relation_info_hook = hypo_get_relation_info_hook; prev_explain_get_index_name_hook = explain_get_index_name_hook; explain_get_index_name_hook = hypo_explain_get_index_name_hook; isExplain = false; hypoIndexes = NIL; hypoHiddenIndexes = NIL; HypoMemoryContext = AllocSetContextCreate(TopMemoryContext, "HypoPG context", #if PG_VERSION_NUM >= 90600 ALLOCSET_DEFAULT_SIZES #else ALLOCSET_DEFAULT_MINSIZE, ALLOCSET_DEFAULT_INITSIZE, ALLOCSET_DEFAULT_MAXSIZE #endif ); DefineCustomBoolVariable("hypopg.enabled", "Enable / Disable hypopg", NULL, &hypo_is_enabled, true, PGC_USERSET, 0, NULL, NULL, NULL); DefineCustomBoolVariable("hypopg.use_real_oids", "Use real oids rather than the range < 16384", NULL, &hypo_use_real_oids, false, PGC_USERSET, 0, NULL, NULL, NULL); EmitWarningsOnPlaceholders("hypopg"); } /*--------------------------------- * Return a new OID for an hypothetical index. * * To avoid locking on pg_class (required to safely call GetNewOidWithIndex or * similar) and to be usable on a standby node, use the oids unused in the * FirstBootstrapObjectId / FirstNormalObjectId range rather than real oids. * For performance, always start with the biggest oid lesser than * FirstNormalObjectId. 
This way the loop to find an unused oid will only * happens once a single backend has created more than ~2.5k hypothetical * indexes. * * For people needing to have thousands of hypothetical indexes at the same * time, we also allow to use the initial implementation that relies on real * oids, which comes with all the limitations mentioned above. */ Oid hypo_getNewOid(Oid relid) { Oid newoid = InvalidOid; if (hypo_use_real_oids) { Relation pg_class; Relation relation; /* Open the relation on which we want a new OID */ relation = table_open(relid, AccessShareLock); /* Close the relation and release the lock now */ table_close(relation, AccessShareLock); /* Open pg_class to aks a new OID */ pg_class = table_open(RelationRelationId, RowExclusiveLock); /* ask for a new Oid */ newoid = GetNewOidWithIndex(pg_class, ClassOidIndexId, #if PG_VERSION_NUM < 120000 ObjectIdAttributeNumber #else Anum_pg_class_oid #endif ); /* Close pg_class and release the lock now */ table_close(pg_class, RowExclusiveLock); } else { /* * First, make sure we know what is the biggest oid smaller than * FirstNormalObjectId present in pg_class. This can never change so * we cache the value. */ if (!OidIsValid(min_fake_oid)) min_fake_oid = hypo_get_min_fake_oid(); Assert(OidIsValid(min_fake_oid)); /* Make sure there's enough room to get one more Oid */ if (list_length(hypoIndexes) >= (FirstNormalObjectId - min_fake_oid)) { ereport(ERROR, (errmsg("hypopg: not more oid available"), errhint("Remove hypothetical indexes " "or enable hypopg.use_real_oids"))); } while(!OidIsValid(newoid)) { CHECK_FOR_INTERRUPTS(); if (!OidIsValid(last_oid)) newoid = last_oid = min_fake_oid; else newoid = ++last_oid; /* Check if we just exceeded the fake oids range */ if (newoid >= FirstNormalObjectId) { newoid = min_fake_oid; last_oid = InvalidOid; oid_wraparound = true; } /* * If we already used all available fake oids, we have to make sure * that the oid isn't used anymore. */ if (oid_wraparound) { if (hypo_get_index(newoid) != NULL) { /* We can't use this oid. Reset newoid and start again */ newoid = InvalidOid; } } } } Assert(OidIsValid(newoid)); return newoid; } /* Reset the state of the fake oid generator. */ void hypo_reset_fake_oids(void) { Assert(hypoIndexes == NIL); last_oid = InvalidOid; oid_wraparound = false; } /* This function setup the "isExplain" flag for next hooks. * If this flag is setup, we can add hypothetical indexes. 
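 *
 * A minimal usage sketch (table and column names are only illustrative):
 *
 *   SELECT * FROM hypopg_create_index('CREATE INDEX ON hypo (id)');
 *   EXPLAIN SELECT * FROM hypo WHERE id = 1;         -- may use the hypothetical index
 *   EXPLAIN ANALYZE SELECT * FROM hypo WHERE id = 1; -- never does, isExplain stays false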
*/ void hypo_utility_hook( #if PG_VERSION_NUM >= 100000 PlannedStmt *pstmt, #else Node *parsetree, #endif const char *queryString, #if PG_VERSION_NUM >= 140000 bool readOnlyTree, #endif #if PG_VERSION_NUM >= 90300 ProcessUtilityContext context, #endif ParamListInfo params, #if PG_VERSION_NUM >= 100000 QueryEnvironment *queryEnv, #endif #if PG_VERSION_NUM < 90300 bool isTopLevel, #endif DestReceiver *dest, #if PG_VERSION_NUM < 130000 char *completionTag #else QueryCompletion *qc #endif ) { isExplain = hypo_is_simple_explain( #if PG_VERSION_NUM >= 100000 (Node *) pstmt #else parsetree #endif ); if (prev_utility_hook) prev_utility_hook( #if PG_VERSION_NUM >= 100000 pstmt, #else parsetree, #endif queryString, #if PG_VERSION_NUM >= 140000 readOnlyTree, #endif #if PG_VERSION_NUM >= 90300 context, #endif params, #if PG_VERSION_NUM >= 100000 queryEnv, #endif #if PG_VERSION_NUM < 90300 isTopLevel, #endif dest, #if PG_VERSION_NUM < 130000 completionTag #else qc #endif ); else standard_ProcessUtility( #if PG_VERSION_NUM >= 100000 pstmt, #else parsetree, #endif queryString, #if PG_VERSION_NUM >= 140000 readOnlyTree, #endif #if PG_VERSION_NUM >= 90300 context, #endif params, #if PG_VERSION_NUM >= 100000 queryEnv, #endif #if PG_VERSION_NUM < 90300 isTopLevel, #endif dest, #if PG_VERSION_NUM < 130000 completionTag #else qc #endif ); } static bool hypo_index_match_table(hypoIndex *entry, Oid relid) { /* Hypothetical index on the exact same relation, use it. */ if (entry->relid == relid) return true; #if PG_VERSION_NUM >= 110000 /* * If the table is a partition, see if the hypothetical index belongs to * one of the partition parent. */ if (get_rel_relispartition(relid)) { List *parents = get_partition_ancestors(relid); ListCell *lc; foreach(lc, parents) { Oid oid = lfirst_oid(lc); if (oid == entry->relid) return true; } } #endif return false; } /* Detect if the current utility command is compatible with hypothetical indexes * i.e. an EXPLAIN, no ANALYZE */ static bool hypo_is_simple_explain(Node *parsetree) { if (parsetree == NULL) return false; #if PG_VERSION_NUM >= 100000 parsetree = ((PlannedStmt *) parsetree)->utilityStmt; if (parsetree == NULL) return false; #endif switch (nodeTag(parsetree)) { case T_ExplainStmt: { ListCell *lc; foreach(lc, ((ExplainStmt *) parsetree)->options) { DefElem *opt = (DefElem *) lfirst(lc); if (strcmp(opt->defname, "analyze") == 0) return false; } } return true; break; default: return false; } return false; } /* Reset the isExplain flag after each query */ static void hypo_executorEnd_hook(QueryDesc *queryDesc) { isExplain = false; if (prev_ExecutorEnd_hook) prev_ExecutorEnd_hook(queryDesc); else standard_ExecutorEnd(queryDesc); } /* * Return the minmum usable oid in the FirstBootstrapObjectId - * FirstNormalObjectId range. 
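 *
 * This is done with a single catalog lookup over SPI, roughly equivalent
 * to the following query (a sketch; 16384 is FirstNormalObjectId on stock
 * builds), plus 1:
 *
 *   SELECT max(oid) FROM pg_catalog.pg_class WHERE oid < 16384;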
*/ static Oid hypo_get_min_fake_oid(void) { int ret, nb; Oid oid = InvalidOid; /* * Connect to SPI manager */ if ((ret = SPI_connect()) < 0) /* internal error */ elog(ERROR, "SPI connect failure - returned %d", ret); ret = SPI_execute("SELECT max(oid)" " FROM pg_catalog.pg_class" " WHERE oid < " CppAsString2(FirstNormalObjectId), true, 1); nb = SPI_processed; if (ret != SPI_OK_SELECT || nb == 0) { SPI_finish(); elog(ERROR, "hypopg: could not find the minimum fake oid"); } oid = atooid(SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1)) + 1; /* release SPI related resources (and return to caller's context) */ SPI_finish(); Assert(OidIsValid(oid)); return oid; } /* * This function will execute the "hypo_injectHypotheticalIndex" for every * hypothetical index found for each relation if the isExplain flag is setup. */ static void hypo_get_relation_info_hook(PlannerInfo *root, Oid relationObjectId, bool inhparent, RelOptInfo *rel) { if (isExplain && hypo_is_enabled) { Relation relation; /* Open the current relation */ relation = table_open(relationObjectId, AccessShareLock); if (relation->rd_rel->relkind == RELKIND_RELATION #if PG_VERSION_NUM >= 90300 || relation->rd_rel->relkind == RELKIND_MATVIEW #endif ) { ListCell *lc; foreach(lc, hypoIndexes) { hypoIndex *entry = (hypoIndex *) lfirst(lc); if (hypo_index_match_table(entry, RelationGetRelid(relation))) { /* * hypothetical index found, add it to the relation's * indextlist */ hypo_injectHypotheticalIndex(root, relationObjectId, inhparent, rel, relation, entry); } } hypo_hideIndexes(rel); } /* Close the relation release the lock now */ table_close(relation, AccessShareLock); } if (prev_get_relation_info_hook) prev_get_relation_info_hook(root, relationObjectId, inhparent, rel); } /* * Reset statistics. */ PGDLLEXPORT Datum hypopg_reset(PG_FUNCTION_ARGS) { hypo_index_reset(); PG_RETURN_VOID(); } hypopg-1.4.0/hypopg.control000066400000000000000000000002241443433066400157370ustar00rootroot00000000000000# hypopg extension comment = 'Hypothetical indexes for PostgreSQL' default_version = '1.4.0' module_pathname = '$libdir/hypopg' relocatable = true hypopg-1.4.0/hypopg_index.c000066400000000000000000001775051443433066400157110ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg_index.c: Implementation of hypothetical indexes for PostgreSQL * * This file contains all the internal code related to hypothetical indexes * support. * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. 
* * Copyright (C) 2015-2023: Julien Rouhaud * *------------------------------------------------------------------------- */ #include #include #include "postgres.h" #include "fmgr.h" #include "funcapi.h" #include "miscadmin.h" #if PG_VERSION_NUM >= 90500 #include "access/brin.h" #include "access/brin_page.h" #include "access/brin_tuple.h" #endif #include "access/gist.h" #if PG_VERSION_NUM >= 90300 #include "access/htup_details.h" #endif #include "access/nbtree.h" #include "access/reloptions.h" #include "access/spgist.h" #include "access/spgist_private.h" #include "access/sysattr.h" #include "access/xlog.h" #include "catalog/namespace.h" #include "catalog/pg_am.h" #include "catalog/pg_amproc.h" #include "catalog/pg_class.h" #include "catalog/pg_opclass.h" #include "catalog/pg_type.h" #include "commands/defrem.h" #if PG_VERSION_NUM >= 120000 #include "nodes/makefuncs.h" #endif #include "optimizer/clauses.h" #include "optimizer/cost.h" #include "optimizer/pathnode.h" #if PG_VERSION_NUM < 120000 #include "optimizer/var.h" #else #include "optimizer/optimizer.h" #endif #include "parser/parse_utilcmd.h" #include "parser/parser.h" #if PG_VERSION_NUM >= 120000 #include "port/pg_bitutils.h" #endif #include "storage/bufmgr.h" #include "utils/builtins.h" #include "utils/lsyscache.h" #include "utils/rel.h" #if PG_VERSION_NUM >= 90500 #include "utils/ruleutils.h" #endif #include "utils/syscache.h" #include "include/hypopg.h" #include "include/hypopg_index.h" #if PG_VERSION_NUM >= 90600 /* this will be updated, when needed, by hypo_discover_am */ static Oid BLOOM_AM_OID = InvalidOid; #endif /*--- Variables exported ---*/ explain_get_index_name_hook_type prev_explain_get_index_name_hook; List *hypoIndexes; List *hypoHiddenIndexes; /*--- Functions --- */ PG_FUNCTION_INFO_V1(hypopg); PG_FUNCTION_INFO_V1(hypopg_create_index); PG_FUNCTION_INFO_V1(hypopg_drop_index); PG_FUNCTION_INFO_V1(hypopg_relation_size); PG_FUNCTION_INFO_V1(hypopg_get_indexdef); PG_FUNCTION_INFO_V1(hypopg_reset_index); PG_FUNCTION_INFO_V1(hypopg_hide_index); PG_FUNCTION_INFO_V1(hypopg_unhide_index); PG_FUNCTION_INFO_V1(hypopg_unhide_all_indexes); PG_FUNCTION_INFO_V1(hypopg_hidden_indexes); static void hypo_addIndex(hypoIndex * entry); static bool hypo_can_return(hypoIndex * entry, Oid atttype, int i, char *amname); static void hypo_discover_am(char *amname, Oid oid); static void hypo_estimate_index_simple(hypoIndex * entry, BlockNumber *pages, double *tuples); static void hypo_estimate_index(hypoIndex * entry, RelOptInfo *rel); static int hypo_estimate_index_colsize(hypoIndex * entry, int col); static void hypo_index_pfree(hypoIndex * entry); static bool hypo_index_remove(Oid indexid); static bool hypo_index_unhide(Oid indexid); static const hypoIndex *hypo_index_store_parsetree(IndexStmt *node, const char *queryString); static hypoIndex * hypo_newIndex(Oid relid, char *accessMethod, int nkeycolumns, int ninccolumns, List *options); static void hypo_set_indexname(hypoIndex * entry, char *indexname); /* * palloc a new hypoIndex, and give it a new OID, and some other global stuff. * This function also parse index storage options (if any) to check if they're * valid. 
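 *
 * For example (illustrative statement), the WITH clause in
 *   CREATE INDEX ON hypo (id) WITH (fillfactor = 10)
 * is validated here through the AM's amoptions / index_reloptions(), so an
 * out-of-range value such as fillfactor = 1 fails with the same error a
 * real CREATE INDEX would raise.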
*/ static hypoIndex * hypo_newIndex(Oid relid, char *accessMethod, int nkeycolumns, int ninccolumns, List *options) { /* must be declared "volatile", because used in a PG_CATCH() */ hypoIndex *volatile entry; MemoryContext oldcontext; HeapTuple tuple; Oid oid; #if PG_VERSION_NUM >= 90600 IndexAmRoutine *amroutine; amoptions_function amoptions; #else RegProcedure amoptions; #endif tuple = SearchSysCache1(AMNAME, PointerGetDatum(accessMethod)); if (!HeapTupleIsValid(tuple)) { ereport(ERROR, (errcode(ERRCODE_UNDEFINED_OBJECT), errmsg("hypopg: access method \"%s\" does not exist", accessMethod))); } #if PG_VERSION_NUM < 120000 oid = HeapTupleGetOid(tuple); #else oid = ((Form_pg_am) GETSTRUCT(tuple))->oid; #endif hypo_discover_am(accessMethod, oid); oldcontext = MemoryContextSwitchTo(HypoMemoryContext); entry = palloc0(sizeof(hypoIndex)); entry->relam = oid; #if PG_VERSION_NUM >= 90600 /* * Since 9.6, AM informations are available through an amhandler function, * returning an IndexAmRoutine containing what's needed. */ amroutine = GetIndexAmRoutine(((Form_pg_am) GETSTRUCT(tuple))->amhandler); entry->amcostestimate = amroutine->amcostestimate; entry->amcanreturn = amroutine->amcanreturn; entry->amcanorderbyop = amroutine->amcanorderbyop; entry->amoptionalkey = amroutine->amoptionalkey; entry->amsearcharray = amroutine->amsearcharray; entry->amsearchnulls = amroutine->amsearchnulls; entry->amhasgettuple = (amroutine->amgettuple != NULL); entry->amhasgetbitmap = (amroutine->amgetbitmap != NULL); entry->amcanunique = amroutine->amcanunique; entry->amcanmulticol = amroutine->amcanmulticol; amoptions = amroutine->amoptions; entry->amcanorder = amroutine->amcanorder; #if PG_VERSION_NUM >= 110000 entry->amcanparallel = amroutine->amcanparallel; entry->amcaninclude = amroutine->amcaninclude; #endif #else /* Up to 9.5, all information is available in the pg_am tuple */ entry->amcostestimate = ((Form_pg_am) GETSTRUCT(tuple))->amcostestimate; entry->amcanreturn = ((Form_pg_am) GETSTRUCT(tuple))->amcanreturn; entry->amcanorderbyop = ((Form_pg_am) GETSTRUCT(tuple))->amcanorderbyop; entry->amoptionalkey = ((Form_pg_am) GETSTRUCT(tuple))->amoptionalkey; entry->amsearcharray = ((Form_pg_am) GETSTRUCT(tuple))->amsearcharray; entry->amsearchnulls = ((Form_pg_am) GETSTRUCT(tuple))->amsearchnulls; entry->amhasgettuple = OidIsValid(((Form_pg_am) GETSTRUCT(tuple))->amgettuple); entry->amhasgetbitmap = OidIsValid(((Form_pg_am) GETSTRUCT(tuple))->amgetbitmap); entry->amcanunique = ((Form_pg_am) GETSTRUCT(tuple))->amcanunique; entry->amcanmulticol = ((Form_pg_am) GETSTRUCT(tuple))->amcanmulticol; amoptions = ((Form_pg_am) GETSTRUCT(tuple))->amoptions; entry->amcanorder = ((Form_pg_am) GETSTRUCT(tuple))->amcanorder; #endif ReleaseSysCache(tuple); entry->indexname = palloc0(NAMEDATALEN); /* palloc all arrays */ entry->indexkeys = palloc0(sizeof(short int) * (nkeycolumns + ninccolumns)); entry->indexcollations = palloc0(sizeof(Oid) * nkeycolumns); entry->opfamily = palloc0(sizeof(Oid) * nkeycolumns); entry->opclass = palloc0(sizeof(Oid) * nkeycolumns); entry->opcintype = palloc0(sizeof(Oid) * nkeycolumns); /* only palloc sort related fields if needed */ if ((entry->relam == BTREE_AM_OID) || (entry->amcanorder)) { if (entry->relam != BTREE_AM_OID) entry->sortopfamily = palloc0(sizeof(Oid) * nkeycolumns); entry->reverse_sort = palloc0(sizeof(bool) * nkeycolumns); entry->nulls_first = palloc0(sizeof(bool) * nkeycolumns); } else { entry->sortopfamily = NULL; entry->reverse_sort = NULL; entry->nulls_first = NULL; } #if 
PG_VERSION_NUM >= 90500 entry->canreturn = palloc0(sizeof(bool) * (nkeycolumns + ninccolumns)); #endif entry->indexprs = NIL; entry->indpred = NIL; entry->options = (List *) copyObject(options); MemoryContextSwitchTo(oldcontext); entry->oid = hypo_getNewOid(relid); entry->relid = relid; entry->immediate = true; if (options != NIL) { Datum reloptions; /* * Parse AM-specific options, convert to text array form, validate. */ reloptions = transformRelOptions((Datum) 0, options, NULL, NULL, false, false); (void) index_reloptions(amoptions, reloptions, true); } PG_TRY(); { /* * reject unsupported am. It could be done earlier but it's simpler * (and was previously done) here. */ if (entry->relam != BTREE_AM_OID #if PG_VERSION_NUM >= 90500 && entry->relam != BRIN_AM_OID #endif #if PG_VERSION_NUM >= 90600 && entry->relam != BLOOM_AM_OID #endif #if PG_VERSION_NUM >= 100000 /* * Only support hash indexes for pg10+. In previous version they * weren't crash safe, and changes in pg10+ also significantly * changed the disk space allocation. */ && entry->relam != HASH_AM_OID #endif ) { /* * do not store hypothetical indexes with access method not * supported */ elog(ERROR, "hypopg: access method \"%s\" is not supported", accessMethod); break; } /* No more elog beyond this point. */ } PG_CATCH(); { /* Free what was palloc'd in HypoMemoryContext */ hypo_index_pfree(entry); PG_RE_THROW(); } PG_END_TRY(); return entry; } /* Add an hypoIndex to hypoIndexes */ static void hypo_addIndex(hypoIndex * entry) { MemoryContext oldcontext; oldcontext = MemoryContextSwitchTo(HypoMemoryContext); hypoIndexes = lappend(hypoIndexes, entry); MemoryContextSwitchTo(oldcontext); } /* * Remove cleanly all hypothetical indexes by calling hypo_index_remove() on * each entry. hypo_index_remove() function pfree all allocated memory */ void hypo_index_reset(void) { ListCell *lc; /* * The cell is removed in hypo_index_remove(), so we can't iterate using * standard foreach / lnext macros. */ while ((lc = list_head(hypoIndexes)) != NULL) { hypoIndex *entry = (hypoIndex *) lfirst(lc); hypo_index_remove(entry->oid); } list_free(hypoIndexes); hypoIndexes = NIL; hypo_reset_fake_oids(); return; } /* * Create an hypothetical index from its CREATE INDEX parsetree. This function * is where all the hypothetic index creation is done, except the index size * estimation. */ static const hypoIndex * hypo_index_store_parsetree(IndexStmt *node, const char *queryString) { /* must be declared "volatile", because used in a PG_CATCH() */ hypoIndex *volatile entry; Form_pg_attribute attform; Oid relid; StringInfoData indexRelationName; int nkeycolumns, ninccolumns; ListCell *lc; int attn; /* * Support for hypothetical BRIN indexes is broken in some minor versions * of pg10, pg11 and pg12. For simplicity, check PG_VERSION_NUM rather * than the real instance version, which should be right most of the * time. When it's not, the only effect is to have a less user-friendly * error message. 
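 *
 * Concretely, the #if below only enables the check when this file was
 * built against 10.x < 10.12, 11.x < 11.7 or 12.x < 12.2, the versions
 * referenced in the error message.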
*/ #if ((PG_VERSION_NUM >= 100000 && PG_VERSION_NUM < 100012) || \ (PG_VERSION_NUM >= 110000 && PG_VERSION_NUM < 110007) || \ (PG_VERSION_NUM >= 120000 && PG_VERSION_NUM < 120002)) if (get_am_oid(node->accessMethod, true) == BRIN_AM_OID) { elog(ERROR, "hypopg: BRIN hypothetical indexes are only supported" " with PostgreSQL " #if PG_VERSION_NUM >= 120000 "12.2" #else #if PG_VERSION_NUM >= 110000 "11.7" #else "10.12" #endif /* pg 11 */ #endif /* pg 12 */ " and later."); } #endif relid = RangeVarGetRelid(node->relation, AccessShareLock, false); /* Some sanity checks */ switch (get_rel_relkind(relid)) { #if PG_VERSION_NUM >= 90300 case RELKIND_MATVIEW: #endif #if PG_VERSION_NUM >= 110000 case RELKIND_PARTITIONED_TABLE: #endif case RELKIND_RELATION: /* this is supported */ break; #if PG_VERSION_NUM >= 100000 && PG_VERSION_NUM < 110000 case RELKIND_PARTITIONED_TABLE: elog(ERROR, "hypopg: cannot create hypothetical index on" " partitioned table \"%s\"", node->relation->relname); break; #endif default: #if PG_VERSION_NUM >= 90300 elog(ERROR, "hypopg: \"%s\" is not a table or materialized view", node->relation->relname); #else elog(ERROR, "hypopg: \"%s\" is not a table", node->relation->relname); #endif } /* Run parse analysis ... */ node = transformIndexStmt(relid, node, queryString); nkeycolumns = list_length(node->indexParams); #if PG_VERSION_NUM >= 110000 if (list_intersection(node->indexParams, node->indexIncludingParams) != NIL) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("hypopg: included columns must not intersect with key columns"))); ninccolumns = list_length(node->indexIncludingParams); #else ninccolumns = 0; #endif if (nkeycolumns > INDEX_MAX_KEYS) elog(ERROR, "hypopg: cannot use more thant %d columns in an index", INDEX_MAX_KEYS); initStringInfo(&indexRelationName); appendStringInfo(&indexRelationName, "%s", node->accessMethod); appendStringInfo(&indexRelationName, "_"); if (node->relation->schemaname != NULL && (strcmp(node->relation->schemaname, "public") != 0)) { appendStringInfo(&indexRelationName, "%s", node->relation->schemaname); appendStringInfo(&indexRelationName, "_"); } appendStringInfo(&indexRelationName, "%s", node->relation->relname); /* now create the hypothetical index entry */ entry = hypo_newIndex(relid, node->accessMethod, nkeycolumns, ninccolumns, node->options); PG_TRY(); { HeapTuple tuple; int ind_avg_width = 0; if (node->unique && !entry->amcanunique) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: access method \"%s\" does not support unique indexes", node->accessMethod))); if (nkeycolumns > 1 && !entry->amcanmulticol) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: access method \"%s\" does not support multicolumn indexes", node->accessMethod))); #if PG_VERSION_NUM >= 110000 if (node-> indexIncludingParams != NIL && !entry->amcaninclude) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: access method \"%s\" does not support included columns", node->accessMethod))); #endif entry->unique = node->unique; entry->ncolumns = nkeycolumns + ninccolumns; entry->nkeycolumns = nkeycolumns; /* handle predicate if present */ if (node->whereClause) { MemoryContext oldcontext; List *pred; CheckPredicate((Expr *) node->whereClause); pred = make_ands_implicit((Expr *) node->whereClause); oldcontext = MemoryContextSwitchTo(HypoMemoryContext); entry->indpred = (List *) copyObject(pred); MemoryContextSwitchTo(oldcontext); } else { entry->indpred = NIL; } /* * process attributeList */ attn 
= 0; foreach(lc, node->indexParams) { IndexElem *attribute = (IndexElem *) lfirst(lc); Oid atttype = InvalidOid; Oid opclass; appendStringInfo(&indexRelationName, "_"); /* * Process the column-or-expression to be indexed. */ if (attribute->name != NULL) { /* Simple index attribute */ appendStringInfo(&indexRelationName, "%s", attribute->name); /* get the attribute catalog info */ tuple = SearchSysCacheAttName(relid, attribute->name); if (!HeapTupleIsValid(tuple)) { elog(ERROR, "hypopg: column \"%s\" does not exist", attribute->name); } attform = (Form_pg_attribute) GETSTRUCT(tuple); /* setup the attnum */ entry->indexkeys[attn] = attform->attnum; /* setup the collation */ entry->indexcollations[attn] = attform->attcollation; /* get the atttype */ atttype = attform->atttypid; ReleaseSysCache(tuple); } else { /*--------------------------- * handle index on expression * * Adapted from DefineIndex() and ComputeIndexAttrs() * * Statistics on expression index will be really wrong, since * they're only computed when a real index exists (selectivity * and average width). */ MemoryContext oldcontext; Node *expr = attribute->expr; Assert(expr != NULL); entry->indexcollations[attn] = exprCollation(attribute->expr); atttype = exprType(attribute->expr); appendStringInfo(&indexRelationName, "expr"); /* * Strip any top-level COLLATE clause. This ensures that we * treat "x COLLATE y" and "(x COLLATE y)" alike. */ while (IsA(expr, CollateExpr)) expr = (Node *) ((CollateExpr *) expr)->arg; if (IsA(expr, Var) && ((Var *) expr)->varattno != InvalidAttrNumber) { /* * User wrote "(column)" or "(column COLLATE something)". * Treat it like simple attribute anyway. */ entry->indexkeys[attn] = ((Var *) expr)->varattno; /* * Generated index name will have _expr instead of attname * in generated index name, and error message will also be * slighty different in case on unexisting column from a * simple attribute, but that's how ComputeIndexAttrs() * proceed. */ } else { /* * transformExpr() should have already rejected * subqueries, aggregates, and window functions, based on * the EXPR_KIND_ for an index expression. */ /* * An expression using mutable functions is probably * wrong, since if you aren't going to get the same result * for the same data every time, it's not clear what the * index entries mean at all. */ if (CheckMutability((Expr *) expr)) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("hypopg: functions in index expression must be marked IMMUTABLE"))); entry->indexkeys[attn] = 0; /* marks expression */ oldcontext = MemoryContextSwitchTo(HypoMemoryContext); entry->indexprs = lappend(entry->indexprs, (Node *) copyObject(attribute->expr)); MemoryContextSwitchTo(oldcontext); } } ind_avg_width += hypo_estimate_index_colsize(entry, attn); /* * Apply collation override if any */ if (attribute->collation) entry->indexcollations[attn] = get_collation_oid(attribute->collation, false); /* * Check we have a collation iff it's a collatable type. The only * expected failures here are (1) COLLATE applied to a * noncollatable type, or (2) index expression had an unresolved * collation. But we might as well code this to be a complete * consistency check. 
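 *
 * For instance (illustrative statements, assuming id is an integer column
 * and a/b are text columns with conflicting implicit collations), the
 * checks below reject both of these:
 *
 *   CREATE INDEX ON hypo (id COLLATE "C"); -- collation on a noncollatable type
 *   CREATE INDEX ON hypo ((a || b));       -- indeterminate collation, needs an
 *                                          -- explicit COLLATE clause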
*/ if (type_is_collatable(atttype)) { if (!OidIsValid(entry->indexcollations[attn])) ereport(ERROR, (errcode(ERRCODE_INDETERMINATE_COLLATION), errmsg("hypopg: could not determine which collation to use for index expression"), errhint("Use the COLLATE clause to set the collation explicitly."))); } else { if (OidIsValid(entry->indexcollations[attn])) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("hypopg: collations are not supported by type %s", format_type_be(atttype)))); } /* get the opclass */ #if PG_VERSION_NUM < 100000 opclass = GetIndexOpClass(attribute->opclass, atttype, node->accessMethod, entry->relam); #else opclass = ResolveOpClass(attribute->opclass, atttype, node->accessMethod, entry->relam); #endif entry->opclass[attn] = opclass; /* setup the opfamily */ entry->opfamily[attn] = get_opclass_family(opclass); entry->opcintype[attn] = get_opclass_input_type(opclass); /* setup the sort info if am handles it */ if (entry->amcanorder) { /* setup NULLS LAST, NULLS FIRST cases are handled below */ entry->nulls_first[attn] = false; /* default ordering is ASC */ entry->reverse_sort[attn] = (attribute->ordering == SORTBY_DESC); /* default null ordering is LAST for ASC, FIRST for DESC */ if (attribute->nulls_ordering == SORTBY_NULLS_DEFAULT) { if (attribute->ordering == SORTBY_DESC) entry->nulls_first[attn] = true; } else if (attribute->nulls_ordering == SORTBY_NULLS_FIRST) entry->nulls_first[attn] = true; } /* handle index-only scan info */ #if PG_VERSION_NUM < 90500 /* * OIS info is global for the index before 9.5, so look for the * information only once in that case. */ if (attn == 0) { /* * specify first column, but it doesn't matter as this will * only be used with GiST am, which cannot do IOS prior pg 9.5 */ entry->canreturn = hypo_can_return(entry, atttype, 0, node->accessMethod); } #else /* per-column IOS information */ entry->canreturn[attn] = hypo_can_return(entry, atttype, attn, node->accessMethod); #endif attn++; } Assert(attn == nkeycolumns); /* * We disallow indexes on system columns other than OID. They would * not necessarily get updated correctly, and they don't seem useful * anyway. */ for (attn = 0; attn < nkeycolumns; attn++) { AttrNumber attno = entry->indexkeys[attn]; if (attno < 0 #if PG_VERSION_NUM < 120000 && attno != ObjectIdAttributeNumber #endif ) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: index creation on system columns is not supported"))); } #if PG_VERSION_NUM >= 110000 attn = nkeycolumns; foreach(lc, node->indexIncludingParams) { IndexElem *attribute = (IndexElem *) lfirst(lc); Oid atttype = InvalidOid; appendStringInfo(&indexRelationName, "_"); /* Handle not supported features as in ComputeIndexAttrs() */ if (attribute->collation) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("hypopg: including column does not support a collation"))); if (attribute->opclass) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("hypopg: including column does not support an operator class"))); if (attribute->ordering != SORTBY_DEFAULT) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("hypopg: including column does not support ASC/DESC options"))); if (attribute->nulls_ordering != SORTBY_NULLS_DEFAULT) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("hypopg: including column does not support NULLS FIRST/LAST options"))); /* * Process the column-or-expression to be indexed. 
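 * Note that for INCLUDE columns only plain column references are accepted;
 * the else branch below rejects expressions.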
*/ if (attribute->name != NULL) { /* Simple index attribute */ appendStringInfo(&indexRelationName, "%s", attribute->name); /* get the attribute catalog info */ tuple = SearchSysCacheAttName(relid, attribute->name); if (!HeapTupleIsValid(tuple)) { elog(ERROR, "hypopg: column \"%s\" does not exist", attribute->name); } attform = (Form_pg_attribute) GETSTRUCT(tuple); /* setup the attnum */ entry->indexkeys[attn] = attform->attnum; /* get the atttype */ atttype = attform->atttypid; ReleaseSysCache(tuple); } else { ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: expressions are not supported in included columns"))); } ind_avg_width += hypo_estimate_index_colsize(entry, attn); /* per-column IOS information */ entry->canreturn[attn] = hypo_can_return(entry, atttype, attn, node->accessMethod); attn++; } Assert(attn == (nkeycolumns + ninccolumns)); #endif /* * Also check for system columns used in expressions or predicates. */ if (entry->indexprs || entry->indpred) { Bitmapset *indexattrs = NULL; int i; pull_varattnos((Node *) entry->indexprs, 1, &indexattrs); pull_varattnos((Node *) entry->indpred, 1, &indexattrs); for (i = FirstLowInvalidHeapAttributeNumber + 1; i < 0; i++) { if ( #if PG_VERSION_NUM < 120000 i != ObjectIdAttributeNumber && #endif bms_is_member(i - FirstLowInvalidHeapAttributeNumber, indexattrs)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: index creation on system columns is not supported"))); } } /* Check if the average size fits in a btree index */ if (entry->relam == BTREE_AM_OID) { if (ind_avg_width >= HYPO_BTMaxItemSize) ereport(ERROR, (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), errmsg("hypopg: estimated index row size %d " "exceeds maximum %ld", ind_avg_width, HYPO_BTMaxItemSize), errhint("Values larger than 1/3 of a buffer page " "cannot be indexed.\nConsider a function index " " of an MD5 hash of the value, or use full text " "indexing\n(which is not yet supported by hypopg)." ))); /* Warn about posssible error with a 80% avg size */ else if (ind_avg_width >= HYPO_BTMaxItemSize * .8) ereport(WARNING, (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), errmsg("hypopg: estimated index row size %d " "is close to maximum %ld", ind_avg_width, HYPO_BTMaxItemSize), errhint("Values larger than 1/3 of a buffer page " "cannot be indexed.\nConsider a function index " " of an MD5 hash of the value, or use full text " "indexing\n(which is not yet supported by hypopg)." ))); } /* No more elog beyond this point. */ } PG_CATCH(); { /* Free what was palloc'd in HypoMemoryContext */ hypo_index_pfree(entry); PG_RE_THROW(); } PG_END_TRY(); /* * Fetch the ordering information for the index, if any. Adapted from * plancat.c - get_relation_info(). */ if ((entry->relam != BTREE_AM_OID) && entry->amcanorder) { /* * Otherwise, identify the corresponding btree opfamilies by trying to * map this index's "<" operators into btree. Since "<" uniquely * defines the behavior of a sort order, this is a sufficient test. * * XXX This method is rather slow and also requires the undesirable * assumption that the other index AM numbers its strategies the same * as btree. It'd be better to have a way to explicitly declare the * corresponding btree opfamily for each opfamily of the other index * type. But given the lack of current or foreseeable amcanorder * index types, it's not worth expending more effort on now. 
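 *
 * In short, the loop below proceeds as follows: for each key column,
 * get_opfamily_member() fetches the "<" operator of the index opfamily,
 * and get_ordering_op_properties() checks that it maps to a btree opfamily
 * with BTLessStrategyNumber; on any failure the index is quietly treated
 * as unordered and the sort-related arrays are freed.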
*/ for (attn = 0; attn < nkeycolumns; attn++) { Oid ltopr; Oid btopfamily; Oid btopcintype; int16 btstrategy; ltopr = get_opfamily_member(entry->opfamily[attn], entry->opcintype[attn], entry->opcintype[attn], BTLessStrategyNumber); if (OidIsValid(ltopr) && get_ordering_op_properties(ltopr, &btopfamily, &btopcintype, &btstrategy) && btopcintype == entry->opcintype[attn] && btstrategy == BTLessStrategyNumber) { /* Successful mapping */ entry->sortopfamily[attn] = btopfamily; } else { /* Fail ... quietly treat index as unordered */ /* also pfree allocated memory */ pfree(entry->sortopfamily); pfree(entry->reverse_sort); pfree(entry->nulls_first); entry->sortopfamily = NULL; entry->reverse_sort = NULL; entry->nulls_first = NULL; break; } } } hypo_set_indexname(entry, indexRelationName.data); hypo_addIndex(entry); return entry; } /* * Remove an hypothetical index from the list of hypothetical indexes. * pfree (by calling hypo_index_pfree) all memory that has been allocated. */ static bool hypo_index_remove(Oid indexid) { ListCell *lc; /* remove this index from the list of hidden indexes if present */ hypo_index_unhide(indexid); foreach(lc, hypoIndexes) { hypoIndex *entry = (hypoIndex *) lfirst(lc); if (entry->oid == indexid) { hypoIndexes = list_delete_ptr(hypoIndexes, entry); hypo_index_pfree(entry); return true; } } return false; } /* pfree all allocated memory for within an hypoIndex and the entry itself. */ static void hypo_index_pfree(hypoIndex * entry) { /* pfree all memory that has been allocated */ pfree(entry->indexname); pfree(entry->indexkeys); pfree(entry->indexcollations); pfree(entry->opfamily); pfree(entry->opclass); pfree(entry->opcintype); if ((entry->relam == BTREE_AM_OID) || entry->amcanorder) { if ((entry->relam != BTREE_AM_OID) && entry->sortopfamily) pfree(entry->sortopfamily); if (entry->reverse_sort) pfree(entry->reverse_sort); if (entry->nulls_first) pfree(entry->nulls_first); } if (entry->indexprs) list_free_deep(entry->indexprs); if (entry->indpred) pfree(entry->indpred); #if PG_VERSION_NUM >= 90500 pfree(entry->canreturn); #endif /* finally pfree the entry */ pfree(entry); } /*-------------------------------------------------- * Add an hypothetical index to the list of indexes. * Caller should have check that the specified hypoIndex does belong to the * specified relation. This function also assume that the specified entry * already contains every needed information, so we just basically need to copy * it from the hypoIndex to the new IndexOptInfo. Every specific handling is * done at store time (ie. hypo_index_store_parsetree). The only exception is * the size estimation, recomputed verytime, as it needs up to date statistics. */ void hypo_injectHypotheticalIndex(PlannerInfo *root, Oid relationObjectId, bool inhparent, RelOptInfo *rel, Relation relation, hypoIndex * entry) { IndexOptInfo *index; int ncolumns, /* * For convenience and readability, use nkeycolumns even for pg10- * version. 
In this case, this var will be initialized to ncolumns */ nkeycolumns, i; /* create a node */ index = makeNode(IndexOptInfo); index->relam = entry->relam; /* General stuff */ index->indexoid = entry->oid; index->reltablespace = rel->reltablespace; /* same tablespace as * relation, TODO */ index->rel = rel; index->ncolumns = ncolumns = entry->ncolumns; #if PG_VERSION_NUM >= 110000 index->nkeycolumns = nkeycolumns = entry->nkeycolumns; #else nkeycolumns = ncolumns; #endif index->indexkeys = (int *) palloc(sizeof(int) * ncolumns); index->indexcollations = (Oid *) palloc(sizeof(int) * nkeycolumns); index->opfamily = (Oid *) palloc(sizeof(int) * nkeycolumns); index->opcintype = (Oid *) palloc(sizeof(int) * nkeycolumns); if ((index->relam == BTREE_AM_OID) || entry->amcanorder) { if (index->relam != BTREE_AM_OID) index->sortopfamily = palloc0(sizeof(Oid) * nkeycolumns); index->reverse_sort = (bool *) palloc(sizeof(bool) * nkeycolumns); index->nulls_first = (bool *) palloc(sizeof(bool) * nkeycolumns); } else { index->sortopfamily = NULL; index->reverse_sort = NULL; index->nulls_first = NULL; } #if PG_VERSION_NUM >= 90500 index->canreturn = (bool *) palloc(sizeof(bool) * ncolumns); #endif for (i = 0; i < ncolumns; i++) { index->indexkeys[i] = entry->indexkeys[i]; #if PG_VERSION_NUM >= 90500 index->canreturn[i] = entry->canreturn[i]; #endif } for (i = 0; i < nkeycolumns; i++) { index->indexcollations[i] = entry->indexcollations[i]; index->opfamily[i] = entry->opfamily[i]; index->opcintype[i] = entry->opcintype[i]; } /* * Fetch the ordering information for the index, if any. This is handled * in hypo_index_store_parsetree(). Again, adapted from plancat.c - * get_relation_info() */ if (entry->relam == BTREE_AM_OID) { /* * If it's a btree index, we can use its opfamily OIDs directly as the * sort ordering opfamily OIDs. */ index->sortopfamily = index->opfamily; for (i = 0; i < nkeycolumns; i++) { index->reverse_sort[i] = entry->reverse_sort[i]; index->nulls_first[i] = entry->nulls_first[i]; } } else if (entry->amcanorder) { if (entry->sortopfamily) { for (i = 0; i < nkeycolumns; i++) { index->sortopfamily[i] = entry->sortopfamily[i]; index->reverse_sort[i] = entry->reverse_sort[i]; index->nulls_first[i] = entry->nulls_first[i]; } } else { index->sortopfamily = NULL; index->reverse_sort = NULL; index->nulls_first = NULL; } } index->unique = entry->unique; index->amcostestimate = entry->amcostestimate; index->immediate = entry->immediate; #if PG_VERSION_NUM < 90500 index->canreturn = entry->canreturn; #endif index->amcanorderbyop = entry->amcanorderbyop; index->amoptionalkey = entry->amoptionalkey; index->amsearcharray = entry->amsearcharray; index->amsearchnulls = entry->amsearchnulls; index->amhasgettuple = entry->amhasgettuple; index->amhasgetbitmap = entry->amhasgetbitmap; #if PG_VERSION_NUM >= 110000 index->amcanparallel = entry->amcanparallel; #endif /* these has already been handled in hypo_index_store_parsetree() if any */ index->indexprs = list_copy(entry->indexprs); index->indpred = list_copy(entry->indpred); index->predOK = false; /* will be set later in indxpath.c */ /* * Build targetlist using the completed indexprs data. 
copied from * PostgreSQL */ index->indextlist = build_index_tlist(root, index, relation); /* * estimate most of the hypothyetical index stuff, more exactly: tuples, * pages and tree_height (9.3+) */ hypo_estimate_index(entry, rel); index->pages = entry->pages; index->tuples = entry->tuples; #if PG_VERSION_NUM >= 90300 index->tree_height = entry->tree_height; #endif /* * obviously, setup this tag. However, it's only checked in * selfuncs.c/get_actual_variable_range, so we still need to add * hypothetical indexes *ONLY* in an explain-no-analyze command. */ index->hypothetical = true; /* add our hypothetical index in the relation's indexlist */ rel->indexlist = lcons(index, rel->indexlist); } /* * Return the stored hypothetical index for a given oid if any, NULL otherwise */ hypoIndex * hypo_get_index(Oid indexId) { ListCell *lc; foreach(lc, hypoIndexes) { hypoIndex *entry = (hypoIndex *) lfirst(lc); if (entry->oid == indexId) return entry; } return NULL; } /* Return the hypothetical index name ifs indexId is ours, NULL otherwise, as * this is what explain_get_index_name expects to continue his job. */ const char * hypo_explain_get_index_name_hook(Oid indexId) { if (isExplain) { hypoIndex *index = NULL; index = hypo_get_index(indexId); if (index) return index->indexname; } if (prev_explain_get_index_name_hook) return prev_explain_get_index_name_hook(indexId); return NULL; } /* * List created hypothetical indexes */ Datum hypopg(PG_FUNCTION_ARGS) { ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo; MemoryContext per_query_ctx; MemoryContext oldcontext; TupleDesc tupdesc; Tuplestorestate *tupstore; ListCell *lc; Datum predDatum; /* check to see if caller supports us returning a tuplestore */ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("set-valued function called in context that cannot accept a set"))); if (!(rsinfo->allowedModes & SFRM_Materialize)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("materialize mode required, but it is not " \ "allowed in this context"))); per_query_ctx = rsinfo->econtext->ecxt_per_query_memory; oldcontext = MemoryContextSwitchTo(per_query_ctx); /* Build a tuple descriptor for our result type */ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) elog(ERROR, "return type must be a row type"); tupstore = tuplestore_begin_heap(true, false, work_mem); rsinfo->returnMode = SFRM_Materialize; rsinfo->setResult = tupstore; rsinfo->setDesc = tupdesc; MemoryContextSwitchTo(oldcontext); foreach(lc, hypoIndexes) { hypoIndex *entry = (hypoIndex *) lfirst(lc); Datum values[HYPO_INDEX_NB_COLS]; bool nulls[HYPO_INDEX_NB_COLS]; ListCell *lc2; StringInfoData exprsString; int i = 0; memset(values, 0, sizeof(values)); memset(nulls, 0, sizeof(nulls)); values[i++] = CStringGetTextDatum(entry->indexname); values[i++] = ObjectIdGetDatum(entry->oid); values[i++] = ObjectIdGetDatum(entry->relid); values[i++] = Int8GetDatum(entry->ncolumns); values[i++] = BoolGetDatum(entry->unique); values[i++] = PointerGetDatum(buildint2vector(entry->indexkeys, entry->ncolumns)); values[i++] = PointerGetDatum(buildoidvector(entry->indexcollations, entry->ncolumns)); values[i++] = PointerGetDatum(buildoidvector(entry->opclass, entry->ncolumns)); nulls[i++] = true; /* no indoption for now, TODO */ /* get each of indexprs, if any */ initStringInfo(&exprsString); foreach(lc2, entry->indexprs) { Node *expr = lfirst(lc2); appendStringInfo(&exprsString, "%s", nodeToString(expr)); } if 
(exprsString.len == 0) nulls[i++] = true; else values[i++] = CStringGetTextDatum(exprsString.data); pfree(exprsString.data); /* * Convert the index predicate (if any) to a text datum. Note we * convert implicit-AND format to normal explicit-AND for storage. */ if (entry->indpred != NIL) { char *predString; predString = nodeToString(make_ands_explicit(entry->indpred)); predDatum = CStringGetTextDatum(predString); pfree(predString); values[i++] = predDatum; } else nulls[i++] = true; values[i++] = ObjectIdGetDatum(entry->relam); Assert(i == HYPO_INDEX_NB_COLS); tuplestore_putvalues(tupstore, tupdesc, values, nulls); } /* clean up and return the tuplestore */ tuplestore_donestoring(tupstore); return (Datum) 0; } /* * SQL wrapper to create an hypothetical index with his parsetree */ Datum hypopg_create_index(PG_FUNCTION_ARGS) { char *sql = TextDatumGetCString(PG_GETARG_DATUM(0)); List *parsetree_list; ListCell *parsetree_item; ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo; MemoryContext per_query_ctx; MemoryContext oldcontext; TupleDesc tupdesc; Tuplestorestate *tupstore; int i = 1; /* check to see if caller supports us returning a tuplestore */ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("set-valued function called in context that cannot accept a set"))); if (!(rsinfo->allowedModes & SFRM_Materialize)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("materialize mode required, but it is not " \ "allowed in this context"))); per_query_ctx = rsinfo->econtext->ecxt_per_query_memory; oldcontext = MemoryContextSwitchTo(per_query_ctx); /* Build a tuple descriptor for our result type */ if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) elog(ERROR, "return type must be a row type"); tupstore = tuplestore_begin_heap(true, false, work_mem); rsinfo->returnMode = SFRM_Materialize; rsinfo->setResult = tupstore; rsinfo->setDesc = tupdesc; MemoryContextSwitchTo(oldcontext); parsetree_list = pg_parse_query(sql); foreach(parsetree_item, parsetree_list) { Node *parsetree = (Node *) lfirst(parsetree_item); Datum values[HYPO_INDEX_CREATE_COLS]; bool nulls[HYPO_INDEX_CREATE_COLS]; const hypoIndex *entry; memset(values, 0, sizeof(values)); memset(nulls, 0, sizeof(nulls)); #if PG_VERSION_NUM >= 100000 parsetree = ((RawStmt *) parsetree)->stmt; #endif if (nodeTag(parsetree) != T_IndexStmt) { elog(WARNING, "hypopg: SQL order #%d is not a CREATE INDEX statement", i); } else { entry = hypo_index_store_parsetree((IndexStmt *) parsetree, sql); if (entry != NULL) { values[0] = ObjectIdGetDatum(entry->oid); values[1] = CStringGetTextDatum(entry->indexname); tuplestore_putvalues(tupstore, tupdesc, values, nulls); } } i++; } /* clean up and return the tuplestore */ tuplestore_donestoring(tupstore); return (Datum) 0; } /* * SQL wrapper to drop an hypothetical index. 
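 *
 * Only indexes previously declared with hypopg_create_index() can be removed
 * here; temporarily masking a real index from the planner is done with
 * hypopg_hide_index() further down.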
*/ Datum hypopg_drop_index(PG_FUNCTION_ARGS) { Oid indexid = PG_GETARG_OID(0); PG_RETURN_BOOL(hypo_index_remove(indexid)); } /* * SQL Wrapper around the hypothetical index size estimation */ Datum hypopg_relation_size(PG_FUNCTION_ARGS) { BlockNumber pages; double tuples; Oid indexid = PG_GETARG_OID(0); ListCell *lc; bool found = false; pages = 0; tuples = 0; foreach(lc, hypoIndexes) { hypoIndex *entry = (hypoIndex *) lfirst(lc); if (entry->oid == indexid) { hypo_estimate_index_simple(entry, &pages, &tuples); found = true; break; } } if (!found) elog(ERROR, "oid %u is not a hypothetical index", indexid); PG_RETURN_INT64(pages * 1.0L * BLCKSZ); } /* * Deparse an hypoIndex, indentified by its indexid to the actual CREATE INDEX * command. * * Heavilty inspired on pg_get_indexdef_worker() */ Datum hypopg_get_indexdef(PG_FUNCTION_ARGS) { Oid indexid = PG_GETARG_OID(0); ListCell *indexpr_item; StringInfoData buf; hypoIndex *entry = NULL; ListCell *lc; List *context; int keyno; foreach(lc, hypoIndexes) { entry = (hypoIndex *) lfirst(lc); if (entry->oid == indexid) break; } if (!entry || entry->oid != indexid) PG_RETURN_NULL(); initStringInfo(&buf); appendStringInfo(&buf, "CREATE %s ON %s.%s USING %s (", (entry->unique ? "UNIQUE INDEX" : "INDEX"), quote_identifier(get_namespace_name(get_rel_namespace(entry->relid))), quote_identifier(get_rel_name(entry->relid)), get_am_name(entry->relam)); indexpr_item = list_head(entry->indexprs); context = deparse_context_for(get_rel_name(entry->relid), entry->relid); for (keyno = 0; keyno < entry->nkeycolumns; keyno++) { Oid indcoll; Oid keycoltype; Oid keycolcollation; char *str; if (keyno != 0) appendStringInfo(&buf, ", "); if (entry->indexkeys[keyno] != 0) { int32 keycoltypmod; #if PG_VERSION_NUM >= 110000 appendStringInfo(&buf, "%s", get_attname(entry->relid, entry->indexkeys[keyno], false)); #else appendStringInfo(&buf, "%s", get_attname(entry->relid, entry->indexkeys[keyno])); #endif get_atttypetypmodcoll(entry->relid, entry->indexkeys[keyno], &keycoltype, &keycoltypmod, &keycolcollation); } else { /* expressional index */ Node *indexkey; if (indexpr_item == NULL) elog(ERROR, "too few entries in indexprs list"); indexkey = (Node *) lfirst(indexpr_item); indexpr_item = lnext(entry->indexprs, indexpr_item); /* Deparse */ str = deparse_expression(indexkey, context, false, false); /* Need parens if it's not a bare function call */ if (indexkey && IsA(indexkey, FuncExpr) && ((FuncExpr *) indexkey)->funcformat == COERCE_EXPLICIT_CALL) appendStringInfoString(&buf, str); else appendStringInfo(&buf, "(%s)", str); keycoltype = exprType(indexkey); keycolcollation = exprCollation(indexkey); } /* Add collation, if not default for column */ indcoll = entry->indexcollations[keyno]; if (OidIsValid(indcoll) && indcoll != keycolcollation) appendStringInfo(&buf, " COLLATE %s", generate_collation_name((indcoll))); /* Add the operator class name, if not default */ get_opclass_name(entry->opclass[keyno], entry->opcintype[keyno], &buf); /* Add options if relevant */ if (entry->amcanorder) { /* if it supports sort ordering, report DESC and NULLS opts */ if (entry->reverse_sort[keyno]) { appendStringInfoString(&buf, " DESC"); /* NULLS FIRST is the default in this case */ if (!(entry->nulls_first[keyno])) appendStringInfoString(&buf, " NULLS LAST"); } else { if (entry->nulls_first[keyno]) appendStringInfoString(&buf, " NULLS FIRST"); } } } appendStringInfo(&buf, ")"); #if PG_VERSION_NUM >= 110000 Assert(entry->ncolumns >= entry->nkeycolumns); if (entry->ncolumns > 
entry->nkeycolumns) { appendStringInfo(&buf, " INCLUDE ("); for (keyno = entry->nkeycolumns; keyno < entry->ncolumns; keyno++) { if (keyno != entry->nkeycolumns) appendStringInfo(&buf, ", "); appendStringInfo(&buf, "%s", get_attname(entry->relid, entry->indexkeys[keyno], false)); } appendStringInfo(&buf, ")"); } #endif if (entry->options) { appendStringInfo(&buf, " WITH ("); foreach(lc, entry->options) { DefElem *elem = (DefElem *) lfirst(lc); appendStringInfo(&buf, "%s = ", elem->defname); if (strcmp(elem->defname, "fillfactor") == 0) appendStringInfo(&buf, "%d", (int32) intVal(elem->arg)); else if (strcmp(elem->defname, "pages_per_range") == 0) appendStringInfo(&buf, "%d", (int32) intVal(elem->arg)); else if (strcmp(elem->defname, "length") == 0) appendStringInfo(&buf, "%d", (int32) intVal(elem->arg)); else elog(WARNING, " hypopg: option %s unhandled, please report the bug", elem->defname); } appendStringInfo(&buf, ")"); } if (entry->indpred) { appendStringInfo(&buf, " WHERE %s", deparse_expression((Node *) make_ands_explicit(entry->indpred), context, false, false)); } PG_RETURN_TEXT_P(cstring_to_text(buf.data)); } /* * SQL wrapper to remove all declared hypothetical indexes. */ Datum hypopg_reset_index(PG_FUNCTION_ARGS) { hypo_index_reset(); PG_RETURN_VOID(); } /* * Add the given oid for the list of hidden indexes * if it's a valid index (hypothetical or real), and if not hidden already. * Return true if the oid is added to the list, false otherwise. */ Datum hypopg_hide_index(PG_FUNCTION_ARGS) { Oid indexid = PG_GETARG_OID(0); MemoryContext old_context; bool is_hypo = false; ListCell *lc; /* first check if it is in hypoIndexes */ foreach(lc, hypoIndexes) { hypoIndex *entry = (hypoIndex *) lfirst(lc); if (entry->oid == indexid) { is_hypo = true; break; } } if (!is_hypo) { HeapTuple index_tup = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(indexid)); if (!HeapTupleIsValid(index_tup)) return false; ReleaseSysCache(index_tup); } if (list_member_oid(hypoHiddenIndexes, indexid)) return false; old_context = MemoryContextSwitchTo(HypoMemoryContext); hypoHiddenIndexes = lappend_oid(hypoHiddenIndexes, indexid); MemoryContextSwitchTo(old_context); return true; } /* * Unhide the given index oid (hypothetical or not) to make it visible to * the planner again. */ Datum hypopg_unhide_index(PG_FUNCTION_ARGS) { Oid indexid = PG_GETARG_OID(0); PG_RETURN_BOOL(hypo_index_unhide(indexid)); } /* * Restore all hidden index. */ Datum hypopg_unhide_all_indexes(PG_FUNCTION_ARGS) { list_free(hypoHiddenIndexes); hypoHiddenIndexes = NIL; PG_RETURN_VOID(); } /* * Get all hidden index oid. 
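 *
 * Only the raw oids are returned here; the hypopg_hidden_indexes view
 * provided by the extension decorates them with table and index names, as
 * exercised in test/sql/hypo_hide_index.sql.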
*/ Datum hypopg_hidden_indexes(PG_FUNCTION_ARGS) { ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo; MemoryContext oldcontext; TupleDesc tupdesc; Tuplestorestate *tupstore; ListCell *lc; /* check to see if caller supports us returning a tuplestore */ if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("set-valued function called in context that cannot accept a set"))); if (!(rsinfo->allowedModes & SFRM_Materialize)) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("materialize mode required, but it is not " \ "allowed in this context"))); oldcontext = MemoryContextSwitchTo(rsinfo->econtext->ecxt_per_query_memory); tupdesc = CreateTemplateTupleDesc(1 #if PG_VERSION_NUM < 120000 , false #endif ); TupleDescInitEntry(tupdesc, (AttrNumber) 1, "indexid", OIDOID, -1, 0); tupstore = tuplestore_begin_heap(true, false, work_mem); rsinfo->returnMode = SFRM_Materialize; rsinfo->setResult = tupstore; rsinfo->setDesc = tupdesc; MemoryContextSwitchTo(oldcontext); foreach(lc, hypoHiddenIndexes) { Oid indexid = lfirst_oid(lc); Datum values[HYPO_HIDDEN_INDEX_COLS]; bool nulls[HYPO_HIDDEN_INDEX_COLS]; memset(values, 0, sizeof(values)); memset(nulls, 0, sizeof(nulls)); values[0] = ObjectIdGetDatum(indexid); tuplestore_putvalues(tupstore, tupdesc, values, nulls); } /* clean up and return the tuplestore */ tuplestore_donestoring(tupstore); return (Datum) 0; } /* * Remove the oid to restore this index on EXPLAIN. */ bool hypo_index_unhide(Oid indexid) { int prev_length = list_length(hypoHiddenIndexes); hypoHiddenIndexes = list_delete_oid(hypoHiddenIndexes, indexid); return prev_length > list_length(hypoHiddenIndexes); } /* * Check rel and delete the same oid index as hypoHiddenIndexes * in rel->indexlist. */ void hypo_hideIndexes(RelOptInfo *rel) { ListCell *cell = NULL; if (rel == NULL) return; if (list_length(rel->indexlist) == 0 || list_length(hypoHiddenIndexes) == 0) return; foreach(cell, hypoHiddenIndexes) { Oid oid = lfirst_oid(cell); ListCell *lc = NULL; #if PG_VERSION_NUM >= 130000 foreach(lc, rel->indexlist) { IndexOptInfo *index = (IndexOptInfo *) lfirst(lc); if (index->indexoid == oid) rel->indexlist = foreach_delete_current(rel->indexlist, lc); } #else ListCell *next; ListCell *prev = NULL; for (lc = list_head(rel->indexlist); lc != NULL; lc = next) { IndexOptInfo *index = (IndexOptInfo *) lfirst(lc); next = lnext(lc); if (index->indexoid == oid) rel->indexlist = list_delete_cell(rel->indexlist, lc, prev); else prev = lc; } #endif } } /* Simple function to set the indexname, dealing with max name length, and the * ending \0 */ static void hypo_set_indexname(hypoIndex * entry, char *indexname) { char oid[12]; /* store , oid shouldn't be more than * 9999999999 */ int totalsize; snprintf(oid, sizeof(oid), "<%d>", entry->oid); /* we'll prefix the given indexname with the oid, and reserve a final \0 */ totalsize = strlen(oid) + strlen(indexname) + 1; /* final index name must not exceed NAMEDATALEN */ if (totalsize > NAMEDATALEN) totalsize = NAMEDATALEN; /* eventually truncate the given indexname at NAMEDATALEN-1 if needed */ strcpy(entry->indexname, oid); strncat(entry->indexname, indexname, totalsize - strlen(oid) - 1); } /* * Fill the pages and tuples information for a given hypoIndex. 
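 *
 * This variant builds a throwaway RelOptInfo for the base relation (pages,
 * tuples and attribute widths, via estimate_rel_size()) and then delegates
 * the actual work to hypo_estimate_index() below.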
*/ static void hypo_estimate_index_simple(hypoIndex * entry, BlockNumber *pages, double *tuples) { RelOptInfo *rel; Relation relation; /* * retrieve number of tuples and pages of the related relation, adapted * from plancat.c/get_relation_info(). */ rel = makeNode(RelOptInfo); /* Open the hypo index' relation */ relation = table_open(entry->relid, AccessShareLock); if (!RelationNeedsWAL(relation) && RecoveryInProgress()) ereport(ERROR, (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("hypopg: cannot access temporary or unlogged relations during recovery"))); rel->min_attr = FirstLowInvalidHeapAttributeNumber + 1; rel->max_attr = RelationGetNumberOfAttributes(relation); rel->reltablespace = RelationGetForm(relation)->reltablespace; Assert(rel->max_attr >= rel->min_attr); rel->attr_needed = (Relids *) palloc0((rel->max_attr - rel->min_attr + 1) * sizeof(Relids)); rel->attr_widths = (int32 *) palloc0((rel->max_attr - rel->min_attr + 1) * sizeof(int32)); estimate_rel_size(relation, rel->attr_widths - rel->min_attr, &rel->pages, &rel->tuples, &rel->allvisfrac); /* Close the relation and release the lock now */ table_close(relation, AccessShareLock); hypo_estimate_index(entry, rel); *pages = entry->pages; *tuples = entry->tuples; } /* * Fill the pages and tuples information for a given hypoIndex and a given * RelOptInfo */ static void hypo_estimate_index(hypoIndex * entry, RelOptInfo *rel) { int i, ind_avg_width = 0; int usable_page_size; int line_size; double bloat_factor; int fillfactor = 0; /* for B-tree, hash, GiST and SP-Gist */ #if PG_VERSION_NUM >= 90500 int pages_per_range = BRIN_DEFAULT_PAGES_PER_RANGE; #endif #if PG_VERSION_NUM >= 90600 int bloomLength = 5; #endif int additional_bloat = 20; ListCell *lc; for (i = 0; i < entry->ncolumns; i++) ind_avg_width += hypo_estimate_index_colsize(entry, i); if (entry->indpred == NIL) { /* No predicate, as much tuples as estmated on its relation */ entry->tuples = rel->tuples; } else { /* * We have a predicate. Find it's selectivity and setup the estimated * number of line according to it */ Selectivity selectivity; PlannerInfo *root; PlannerGlobal *glob; Query *parse; List *rtable = NIL; RangeTblEntry *rte; /* create a fake minimal PlannerInfo */ root = makeNode(PlannerInfo); glob = makeNode(PlannerGlobal); glob->boundParams = NULL; root->glob = glob; /* only 1 table: the one related to this hypothetical index */ rte = makeNode(RangeTblEntry); rte->relkind = RTE_RELATION; rte->relid = entry->relid; rte->inh = false; /* don't include inherited children */ rtable = lappend(rtable, rte); parse = makeNode(Query); parse->rtable = rtable; root->parse = parse; /* * allocate simple_rel_arrays and simple_rte_arrays. This function * will also setup simple_rte_arrays with the previous rte. */ setup_simple_rel_arrays(root); /* also add our table info */ root->simple_rel_array[1] = rel; /* * per comment on clause_selectivity(), JOIN_INNER must be passed if * the clause isn't a join clause, which is our case, and passing 0 to * varRelid is appropriate for restriction clause. 
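 *
 * Illustrative figures only: with rel->tuples = 100000 and a predicate such
 * as "id < 5" estimated at a selectivity of about 0.00005, entry->tuples
 * below comes out to roughly 5.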
*/ selectivity = clauselist_selectivity(root, entry->indpred, 0, JOIN_INNER, NULL); elog(DEBUG1, "hypopg: selectivity for index \"%s\": %lf", entry->indexname, selectivity); entry->tuples = selectivity * rel->tuples; } /* handle index storage parameters */ foreach(lc, entry->options) { DefElem *elem = (DefElem *) lfirst(lc); if (strcmp(elem->defname, "fillfactor") == 0) fillfactor = (int32) intVal(elem->arg); #if PG_VERSION_NUM >= 90500 if (strcmp(elem->defname, "pages_per_range") == 0) pages_per_range = (int32) intVal(elem->arg); #endif #if PG_VERSION_NUM >= 90600 if (strcmp(elem->defname, "length") == 0) bloomLength = (int32) intVal(elem->arg); #endif } if (entry->relam == BTREE_AM_OID) { /* ------------------------------- * quick estimating of index size: * * sizeof(PageHeader) : 24 (1 per page) * sizeof(BTPageOpaqueData): 16 (1 per page) * sizeof(IndexTupleData): 8 (1 per tuple, referencing heap) * sizeof(ItemIdData): 4 (1 per tuple, storing the index item) * default fillfactor: 90% * no NULL handling * fixed additional bloat: 20% * * I'll also need to read more carefully nbtree code to check if * this is accurate enough. * */ line_size = ind_avg_width + +(sizeof(IndexTupleData) * entry->ncolumns) + MAXALIGN(sizeof(ItemIdData) * entry->ncolumns); usable_page_size = BLCKSZ - SizeOfPageHeaderData - sizeof(BTPageOpaqueData); bloat_factor = (200.0 - (fillfactor == 0 ? BTREE_DEFAULT_FILLFACTOR : fillfactor) + additional_bloat) / 100; entry->pages = (BlockNumber) (entry->tuples * line_size * bloat_factor / usable_page_size); #if PG_VERSION_NUM >= 90300 entry->tree_height = -1; /* TODO */ #endif } #if PG_VERSION_NUM >= 90500 else if (entry->relam == BRIN_AM_OID) { HeapTuple ht_opc; Form_pg_opclass opcrec; char *opcname; int ranges = rel->pages / pages_per_range + 1; bool is_minmax = true; int data_size; /* ------------------------------- * quick estimation of index size. A BRIN index contains * - a root page * - a range map: REVMAP_PAGE_MAXITEMS items (one per range * block) per revmap block * - regular type: sizeof(BrinTuple) per range, plus depending * on opclass: * - *_minmax_ops: 2 Datums (min & max obviously) * - *_inclusion_ops: 3 datumes (inclusion and 2 bool) * * I assume same minmax VS. inclusion opclass for all columns. * BRIN access method does not bloat, don't add any additional. */ entry->pages = 1 /* root page */ + (ranges / REVMAP_PAGE_MAXITEMS) + 1; /* revmap */ /* get the operator class name */ ht_opc = SearchSysCache1(CLAOID, ObjectIdGetDatum(entry->opclass[0])); if (!HeapTupleIsValid(ht_opc)) elog(ERROR, "hypopg: cache lookup failed for opclass %u", entry->opclass[0]); opcrec = (Form_pg_opclass) GETSTRUCT(ht_opc); opcname = NameStr(opcrec->opcname); ReleaseSysCache(ht_opc); /* is it a minmax or an inclusion operator class ? */ if (!strstr(opcname, "minmax_ops")) is_minmax = false; /* compute data_size according to opclass kind */ if (is_minmax) data_size = sizeof(BrinTuple) + 2 * ind_avg_width; else data_size = sizeof(BrinTuple) + ind_avg_width + 2 * sizeof(bool); data_size = data_size * ranges / (BLCKSZ - MAXALIGN(SizeOfPageHeaderData)) + 1; entry->pages += data_size; } #endif #if PG_VERSION_NUM >= 90600 else if (entry->relam == BLOOM_AM_OID) { /* ---------------------------- * bloom indexes are fixed size, depending on bloomLength (default 5B), * see blutils.c * * A bloom index contains a meta page. 
* Each other pages contains: * - page header * - opaque data * - lines: * - ItemPointerData (BLOOMTUPLEHDRSZ) * - SignType * bloomLength * */ usable_page_size = BLCKSZ - MAXALIGN(SizeOfPageHeaderData) - MAXALIGN(sizeof_BloomPageOpaqueData); line_size = BLOOMTUPLEHDRSZ + sizeof_SignType * bloomLength; entry->pages = 1; /* meta page */ entry->pages += (BlockNumber) ceil( ((double) entry->tuples * line_size) / usable_page_size); } #endif #if PG_VERSION_NUM >= 100000 else if (entry->relam == HASH_AM_OID) { /* ---------------------------- * From hash AM readme (src/backend/access/hash/README): * * There are four kinds of pages in a hash index: the meta page (page * zero), which contains statically allocated control information; * primary bucket pages; overflow pages; and bitmap pages, which keep * track of overflow pages that have been freed and are available for * re-use. For addressing purposes, bitmap pages are regarded as a * subset of the overflow pages. * [...] * A hash index consists of two or more "buckets", into which tuples * are placed whenever their hash key maps to the bucket number. * [...] * Each bucket in the hash index comprises one or more index pages. * The bucket's first page is permanently assigned to it when the * bucket is created. Additional pages, called "overflow pages", are * added if the bucket receives too many tuples to fit in the primary * bucket page. * * Hash AM also already provides some functions to compute an initial * number of buckets given the estimated number of tuples the index * will contains, which is a good enough estimate for hypothetical * index. * * The code below is simply an adaptation of original code to compute * the initial number of bucket, modified to cope with hypothetical * index, plus some naive estimates for the overflow and bitmap pages. * * For more details, refer to the original code, in: * - _hash_init() * - _hash_init_metabuffer() */ int32 data_width; int32 item_width; int32 ffactor; double dnumbuckets; uint32 num_buckets; uint32 num_overflow; uint32 num_bitmap; uint32 lshift; /* * Determine the target fill factor (in tuples per bucket) for this index. * The idea is to make the fill factor correspond to pages about as full * as the user-settable fillfactor parameter says. We can compute it * exactly since the index datatype (i.e. uint32 hash key) is fixed-width. */ data_width = sizeof(uint32); item_width = MAXALIGN(sizeof(IndexTupleData)) + MAXALIGN(data_width) + sizeof(ItemIdData); /* include the line pointer */ ffactor = HypoHashGetTargetPageUsage(fillfactor) / item_width; /* keep to a sane range */ if (ffactor < 10) ffactor = 10; /* * Choose the number of initial bucket pages to match the fill factor * given the estimated number of tuples. We round up the result to the * total number of buckets which has to be allocated before using its * hashm_spares element. However always force at least 2 bucket pages. The * upper limit is determined by considerations explained in * _hash_expandtable(). */ dnumbuckets = entry->tuples / ffactor; if (dnumbuckets <= 2.0) num_buckets = 2; else if (dnumbuckets >= (double) 0x40000000) num_buckets = 0x40000000; else num_buckets = _hash_get_totalbuckets(_hash_spareindex(dnumbuckets)); /* * Naive estimate of overflow pages, knowing that a page can store ffactor * tuples: we compute the number of tuples that wouldn't fit in the * previously computed number of buckets, and compute the number of pages * needed to store them. 
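 *
 * As an illustration (assuming a typical 64-bit build, 8 kB blocks and the
 * default hash fillfactor of 75): item_width above is 20 bytes, so ffactor
 * is (8192 * 75 / 100) / 20 = 307 tuples per bucket, and 1,000,000 estimated
 * tuples ask for about 3257 buckets before rounding up to the next allowed
 * bucket count.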
*/ num_overflow = Max(0, ((entry->tuples - (num_buckets * ffactor)) / ffactor) + 1); /* find largest bitmap array size that will fit in page size */ #if PG_VERSION_NUM >= 120000 lshift = pg_leftmost_one_pos32(HypoHashGetMaxBitmapSize()); #else for (lshift = _hash_log2(HypoHashGetMaxBitmapSize()); lshift > 0; --lshift) { if ((1 << lshift) <= HypoHashGetMaxBitmapSize()) break; } #endif /* * Naive estimate of bitmap pages, using the previously computed number of * overflow pages. */ num_bitmap = Max(1, num_overflow / (1 <pages = num_buckets + num_overflow + num_bitmap + 1; } #endif else { /* we shouldn't raise this error */ elog(WARNING, "hypopg: access method %d is not supported", entry->relam); } /* make sure the index size is at least one block */ if (entry->pages <= 0) entry->pages = 1; } /* * Estimate a single index's column of an hypothetical index. */ static int hypo_estimate_index_colsize(hypoIndex * entry, int col) { int i, pos; Node *expr; /* If simple attribute, return avg width */ if (entry->indexkeys[col] != 0) return get_attavgwidth(entry->relid, entry->indexkeys[col]); /* It's an expression */ pos = 0; for (i = 0; i < col; i++) { /* get the position in the expression list */ if (entry->indexkeys[i] == 0) pos++; } expr = (Node *) list_nth(entry->indexprs, pos); if (IsA(expr, Var) &&((Var *) expr)->varattno != InvalidAttrNumber) return get_attavgwidth(entry->relid, ((Var *) expr)->varattno); if (IsA(expr, FuncExpr)) { FuncExpr *funcexpr = (FuncExpr *) expr; switch (funcexpr->funcid) { case 2311: /* md5 */ return 32; break; case 870: case 871: { /* lower and upper, detect if simple attr */ Var *var; if (IsA(linitial(funcexpr->args), Var)) { var = (Var *) linitial(funcexpr->args); if (var->varattno > 0) return get_attavgwidth(entry->relid, var->varattno); } break; } default: /* default fallback estimate will be used */ break; } } return 50; /* default fallback estimate */ } /* * canreturn should been checked with the amcanreturn proc, but this * can't be done without a real Relation, so try to find it out */ static bool hypo_can_return(hypoIndex * entry, Oid atttype, int i, char *amname) { /* no amcanreturn entry, am does not handle IOS */ #if PG_VERSION_NUM >= 90600 if (entry->amcanreturn == NULL) return false; #else if (!RegProcedureIsValid(entry->amcanreturn)) return false; #endif switch (entry->relam) { case BTREE_AM_OID: /* btree always support Index-Only scan */ return true; break; case GIST_AM_OID: #if PG_VERSION_NUM >= 90500 { HeapTuple tuple; /* * since 9.5, GiST can do IOS if the opclass define a * GIST_FETCH_PROC support function. 
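 *
 * The syscache lookup below only checks that such a support function is
 * registered for the opclass; it is never actually called, since there is
 * no real index relation to call it on.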
*/ tuple = SearchSysCache4(AMPROCNUM, ObjectIdGetDatum(entry->opfamily[i]), ObjectIdGetDatum(entry->opcintype[i]), ObjectIdGetDatum(entry->opcintype[i]), Int8GetDatum(GIST_FETCH_PROC)); if (!HeapTupleIsValid(tuple)) return false; ReleaseSysCache(tuple); return true; } #else return false; #endif break; case SPGIST_AM_OID: { SpGistCache *cache; spgConfigIn in; HeapTuple tuple; Oid funcid; bool res = false; /* support function 1 tells us if IOS is supported */ tuple = SearchSysCache4(AMPROCNUM, ObjectIdGetDatum(entry->opfamily[i]), ObjectIdGetDatum(entry->opcintype[i]), ObjectIdGetDatum(entry->opcintype[i]), Int8GetDatum(SPGIST_CONFIG_PROC)); /* just in case */ if (!HeapTupleIsValid(tuple)) return false; funcid = ((Form_pg_amproc) GETSTRUCT(tuple))->amproc; ReleaseSysCache(tuple); in.attType = atttype; cache = palloc0(sizeof(SpGistCache)); OidFunctionCall2Coll(funcid, entry->indexcollations[i], PointerGetDatum(&in), PointerGetDatum(&cache->config)); res = cache->config.canReturnData; pfree(cache); return res; } break; default: /* all specific case should have been handled */ elog(WARNING, "hypopg: access method \"%s\" looks like it may" " support Index-Only Scan, but it's unexpected.\n" "Feel free to warn developper.", amname); return false; break; } } /* * Given an access method name and its oid, try to find out if it's a supported * pluggable access method. If so, save its oid for future use. */ static void hypo_discover_am(char *amname, Oid oid) { #if PG_VERSION_NUM < 90600 /* no (reliable) external am before 9.6 */ return; #else /* don't try to handle builtin access method */ if (oid == BTREE_AM_OID || oid == GIST_AM_OID || oid == GIN_AM_OID || oid == SPGIST_AM_OID || oid == BRIN_AM_OID || oid == HASH_AM_OID) return; /* Is it the bloom access method? */ if (strcmp(amname, "bloom") == 0) BLOOM_AM_OID = oid; #endif } hypopg-1.4.0/import/000077500000000000000000000000001443433066400143435ustar00rootroot00000000000000hypopg-1.4.0/import/hypopg_import.c000066400000000000000000000036531443433066400174160ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg_import.c: Import of some PostgreSQL private fuctions. * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. * * Copyright (c) 2008-2023, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ #include "postgres.h" #if PG_VERSION_NUM >= 90300 #include "access/htup_details.h" #endif #include "catalog/namespace.h" #include "catalog/pg_opclass.h" #include "commands/defrem.h" #include "utils/builtins.h" #include "utils/lsyscache.h" #include "utils/syscache.h" #include "include/hypopg_import.h" /* * Copied from src/backend/utils/adt/ruleutils.c, not exported. * * get_opclass_name - fetch name of an index operator class * * The opclass name is appended (after a space) to buf. * * Output is suppressed if the opclass is the default for the given * actual_datatype. (If you don't want this behavior, just pass * InvalidOid for actual_datatype.) 
*/ void get_opclass_name(Oid opclass, Oid actual_datatype, StringInfo buf) { HeapTuple ht_opc; Form_pg_opclass opcrec; char *opcname; char *nspname; ht_opc = SearchSysCache1(CLAOID, ObjectIdGetDatum(opclass)); if (!HeapTupleIsValid(ht_opc)) elog(ERROR, "cache lookup failed for opclass %u", opclass); opcrec = (Form_pg_opclass) GETSTRUCT(ht_opc); if (!OidIsValid(actual_datatype) || GetDefaultOpClass(actual_datatype, opcrec->opcmethod) != opclass) { /* Okay, we need the opclass name. Do we need to qualify it? */ opcname = NameStr(opcrec->opcname); if (OpclassIsVisible(opclass)) appendStringInfo(buf, " %s", quote_identifier(opcname)); else { nspname = get_namespace_name(opcrec->opcnamespace); appendStringInfo(buf, " %s.%s", quote_identifier(nspname), quote_identifier(opcname)); } } ReleaseSysCache(ht_opc); } hypopg-1.4.0/import/hypopg_import_index.c000066400000000000000000000214371443433066400206050ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg_import_index.c: Import of some PostgreSQL private fuctions, used for * hypothetical index. * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. * * Copyright (c) 2008-2023, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ #include "postgres.h" #if PG_VERSION_NUM >= 90300 #include "access/htup_details.h" #endif #include "catalog/heap.h" #include "catalog/namespace.h" #include "catalog/pg_opclass.h" #include "commands/defrem.h" #include "commands/vacuum.h" #include "nodes/makefuncs.h" #include "nodes/pg_list.h" #include "optimizer/clauses.h" #if PG_VERSION_NUM >= 120000 #include "optimizer/optimizer.h" #endif #include "optimizer/planner.h" #include "optimizer/pathnode.h" #if PG_VERSION_NUM >= 110000 #include "partitioning/partbounds.h" #endif #include "parser/parse_coerce.h" #include "utils/builtins.h" #include "utils/rel.h" #include "utils/syscache.h" #include "include/hypopg.h" /* Copied from src/backend/optimizer/util/plancat.c, not exported. * * Build a targetlist representing the columns of the specified index. * Each column is represented by a Var for the corresponding base-relation * column, or an expression in base-relation Vars, as appropriate. * * There are never any dropped columns in indexes, so unlike * build_physical_tlist, we need no failure case. 
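 *
 * When called from HypoPG the IndexOptInfo is a hypothetical one, so
 * index->indexprs has been copied from the stored hypoIndex entry rather
 * than read from the catalogs; the targetlist construction itself is
 * unchanged from the upstream code.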
*/ List * build_index_tlist(PlannerInfo *root, IndexOptInfo *index, Relation heapRelation) { List *tlist = NIL; Index varno = index->rel->relid; ListCell *indexpr_item; int i; indexpr_item = list_head(index->indexprs); for (i = 0; i < index->ncolumns; i++) { int indexkey = index->indexkeys[i]; Expr *indexvar; if (indexkey != 0) { /* simple column */ const FormData_pg_attribute *att_tup; if (indexkey < 0) att_tup = SystemAttributeDefinition(indexkey #if PG_VERSION_NUM < 120000 , heapRelation->rd_rel->relhasoids #endif ); else #if PG_VERSION_NUM >= 110000 att_tup = TupleDescAttr(heapRelation->rd_att, indexkey - 1); #else att_tup = heapRelation->rd_att->attrs[indexkey - 1]; #endif indexvar = (Expr *) makeVar(varno, indexkey, att_tup->atttypid, att_tup->atttypmod, att_tup->attcollation, 0); } else { /* expression column */ if (indexpr_item == NULL) elog(ERROR, "wrong number of index expressions"); indexvar = (Expr *) lfirst(indexpr_item); indexpr_item = lnext(index->indexprs, indexpr_item); } tlist = lappend(tlist, makeTargetEntry(indexvar, i + 1, NULL, false)); } if (indexpr_item != NULL) elog(ERROR, "wrong number of index expressions"); return tlist; } #if PG_VERSION_NUM < 100000 /* * Copied from src/backend/commands/indexcmds.c, not exported. * Resolve possibly-defaulted operator class specification */ Oid GetIndexOpClass(List *opclass, Oid attrType, char *accessMethodName, Oid accessMethodId) { char *schemaname; char *opcname; HeapTuple tuple; Oid opClassId, opInputType; /* * Release 7.0 removed network_ops, timespan_ops, and datetime_ops, so we * ignore those opclass names so the default *_ops is used. This can be * removed in some later release. bjm 2000/02/07 * * Release 7.1 removes lztext_ops, so suppress that too for a while. tgl * 2000/07/30 * * Release 7.2 renames timestamp_ops to timestamptz_ops, so suppress that * too for awhile. I'm starting to think we need a better approach. tgl * 2000/10/01 * * Release 8.0 removes bigbox_ops (which was dead code for a long while * anyway). tgl 2003/11/11 */ if (list_length(opclass) == 1) { char *claname = strVal(linitial(opclass)); if (strcmp(claname, "network_ops") == 0 || strcmp(claname, "timespan_ops") == 0 || strcmp(claname, "datetime_ops") == 0 || strcmp(claname, "lztext_ops") == 0 || strcmp(claname, "timestamp_ops") == 0 || strcmp(claname, "bigbox_ops") == 0) opclass = NIL; } if (opclass == NIL) { /* no operator class specified, so find the default */ opClassId = GetDefaultOpClass(attrType, accessMethodId); if (!OidIsValid(opClassId)) ereport(ERROR, (errcode(ERRCODE_UNDEFINED_OBJECT), errmsg("data type %s has no default operator class for access method \"%s\"", format_type_be(attrType), accessMethodName), errhint("You must specify an operator class for the index or define a default operator class for the data type."))); return opClassId; } /* * Specific opclass name given, so look up the opclass. 
*/ /* deconstruct the name list */ DeconstructQualifiedName(opclass, &schemaname, &opcname); if (schemaname) { /* Look in specific schema only */ Oid namespaceId; #if PG_VERSION_NUM >= 90300 namespaceId = LookupExplicitNamespace(schemaname, false); #else namespaceId = LookupExplicitNamespace(schemaname); #endif tuple = SearchSysCache3(CLAAMNAMENSP, ObjectIdGetDatum(accessMethodId), PointerGetDatum(opcname), ObjectIdGetDatum(namespaceId)); } else { /* Unqualified opclass name, so search the search path */ opClassId = OpclassnameGetOpcid(accessMethodId, opcname); if (!OidIsValid(opClassId)) ereport(ERROR, (errcode(ERRCODE_UNDEFINED_OBJECT), errmsg("operator class \"%s\" does not exist for access method \"%s\"", opcname, accessMethodName))); tuple = SearchSysCache1(CLAOID, ObjectIdGetDatum(opClassId)); } if (!HeapTupleIsValid(tuple)) { ereport(ERROR, (errcode(ERRCODE_UNDEFINED_OBJECT), errmsg("operator class \"%s\" does not exist for access method \"%s\"", NameListToString(opclass), accessMethodName))); } /* * Verify that the index operator class accepts this datatype. Note we * will accept binary compatibility. */ opClassId = HeapTupleGetOid(tuple); opInputType = ((Form_pg_opclass) GETSTRUCT(tuple))->opcintype; if (!IsBinaryCoercible(attrType, opInputType)) ereport(ERROR, (errcode(ERRCODE_DATATYPE_MISMATCH), errmsg("operator class \"%s\" does not accept data type %s", NameListToString(opclass), format_type_be(attrType)))); ReleaseSysCache(tuple); return opClassId; } #endif /* * Copied from src/backend/commands/indexcmds.c, not exported. * CheckPredicate * Checks that the given partial-index predicate is valid. * * This used to also constrain the form of the predicate to forms that * indxpath.c could do something with. However, that seems overly * restrictive. One useful application of partial indexes is to apply * a UNIQUE constraint across a subset of a table, and in that scenario * any evaluatable predicate will work. So accept any predicate here * (except ones requiring a plan), and let indxpath.c fend for itself. */ void CheckPredicate(Expr *predicate) { /* * transformExpr() should have already rejected subqueries, aggregates, * and window functions, based on the EXPR_KIND_ for a predicate. */ /* * A predicate using mutable functions is probably wrong, for the same * reasons that we don't allow an index expression to use one. */ if (CheckMutability(predicate)) ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION), errmsg("functions in index predicate must be marked IMMUTABLE"))); } /* * Copied from src/backend/commands/indexcmds.c, not exported. * CheckMutability * Test whether given expression is mutable */ bool CheckMutability(Expr *expr) { /* * First run the expression through the planner. This has a couple of * important consequences. First, function default arguments will get * inserted, which may affect volatility (consider "default now()"). * Second, inline-able functions will get inlined, which may allow us to * conclude that the function is really less volatile than it's marked. As * an example, polymorphic functions must be marked with the most volatile * behavior that they have for any input type, but once we inline the * function we may be able to conclude that it's not so volatile for the * particular input type we're dealing with. * * We assume here that expression_planner() won't scribble on its input. 
*/ expr = expression_planner(expr); /* Now we can search for non-immutable functions */ return contain_mutable_functions((Node *) expr); } #if PG_VERSION_NUM < 90500 /* * Copied from src/backend/commands/amcmds.c * * get_am_name - given an access method OID name and type, look up its name. */ char * get_am_name(Oid amOid) { HeapTuple tup; char *result = NULL; tup = SearchSysCache1(AMOID, ObjectIdGetDatum(amOid)); if (HeapTupleIsValid(tup)) { Form_pg_am amform = (Form_pg_am) GETSTRUCT(tup); result = pstrdup(NameStr(amform->amname)); ReleaseSysCache(tup); } return result; } #endif hypopg-1.4.0/include/000077500000000000000000000000001443433066400144545ustar00rootroot00000000000000hypopg-1.4.0/include/hypopg.h000066400000000000000000000027671443433066400161470ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg.h: Implementation of hypothetical indexes for PostgreSQL * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. * * Copyright (C) 2015-2023: Julien Rouhaud * *------------------------------------------------------------------------- */ #ifndef _HYPOPG_H_ #define _HYPOPG_H_ #if PG_VERSION_NUM >= 120000 #include "access/table.h" #endif #include "catalog/catalog.h" #include "commands/explain.h" #include "nodes/nodeFuncs.h" #include "utils/memutils.h" #include "include/hypopg_import.h" /* Provide backward compatibility macros for table.c API on pre v12 versions */ #if PG_VERSION_NUM < 120000 #define table_open(r, l) heap_open(r, l) #define table_close(r, l) heap_close(r, l) #endif /* * Hacky macro to provide backward compatibility with either 1 or 2 arg lnext() * on pre v13 versions */ #if PG_VERSION_NUM < 130000 #define LNEXT(_1, _2, NAME, ...) NAME #undef lnext #define lnext(...) LNEXT(__VA_ARGS__, LNEXT2, LNEXT1) (__VA_ARGS__) #define LNEXT1(lc) ((lc)->next) #define LNEXT2(list, lc) ((lc)->next) #endif /* Backport of atooid macro */ #if PG_VERSION_NUM < 100000 #define atooid(x) ((Oid) strtoul((x), NULL, 10)) #endif extern bool isExplain; /* GUC for enabling / disabling hypopg during EXPLAIN */ extern bool hypo_is_enabled; extern MemoryContext HypoMemoryContext; Oid hypo_getNewOid(Oid relid); void hypo_reset_fake_oids(void); #endif hypopg-1.4.0/include/hypopg_import.h000066400000000000000000000013651443433066400175320ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg_import.h: Import of some PostgreSQL private fuctions. * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. * * Copyright (c) 2008-2023, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ #ifndef _HYPOPG_IMPORT_H_ #define _HYPOPG_IMPORT_H_ #include "commands/vacuum.h" #include "lib/stringinfo.h" #include "nodes/pg_list.h" #include "optimizer/planner.h" #include "utils/rel.h" #include "include/hypopg_import_index.h" extern void get_opclass_name(Oid opclass, Oid actual_datatype, StringInfo buf); #endif /* _HYPOPG_IMPORT_H_ */ hypopg-1.4.0/include/hypopg_import_index.h000066400000000000000000000033471443433066400207230ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg_import_index.h: Import of some PostgreSQL private fuctions, used for * hypothetical index. * * This program is open source, licensed under the PostgreSQL license. 
* For license terms, see the LICENSE file. * * Copyright (c) 2008-2023, PostgreSQL Global Development Group * *------------------------------------------------------------------------- */ #ifndef _HYPOPG_IMPORT_INDEX_H_ #define _HYPOPG_IMPORT_INDEX_H_ /* adapted from nbtinsert.h */ #define HYPO_BTMaxItemSize \ MAXALIGN_DOWN((BLCKSZ - \ MAXALIGN(SizeOfPageHeaderData + 3*sizeof(ItemIdData)) - \ MAXALIGN(sizeof(BTPageOpaqueData))) / 3) #if PG_VERSION_NUM >= 100000 #include "access/hash.h" /* adapted from src/include/access/hash.h */ #define HypoHashGetFillFactor(ffactor) \ (((fillfactor) == 0) ? HASH_DEFAULT_FILLFACTOR : (ffactor)) #define HypoHashGetTargetPageUsage(ffactor) \ (BLCKSZ * HypoHashGetFillFactor(ffactor) / 100) #define HypoHashGetMaxBitmapSize() \ (BLCKSZ - \ (MAXALIGN(SizeOfPageHeaderData) + MAXALIGN(sizeof(HashPageOpaqueData)))) #define HypoHashMaxItemSize() \ MAXALIGN_DOWN(BLCKSZ - \ SizeOfPageHeaderData - \ sizeof(ItemIdData) - \ MAXALIGN(sizeof(HashPageOpaqueData))) #endif extern List *build_index_tlist(PlannerInfo *root, IndexOptInfo *index, Relation heapRelation); #if PG_VERSION_NUM < 100000 extern Oid GetIndexOpClass(List *opclass, Oid attrType, char *accessMethodName, Oid accessMethodId); #endif extern void CheckPredicate(Expr *predicate); extern bool CheckMutability(Expr *expr); #if PG_VERSION_NUM < 90500 extern char *get_am_name(Oid amOid); #endif #endif /* _HYPOPG_IMPORT_INDEX_H_ */ hypopg-1.4.0/include/hypopg_index.h000066400000000000000000000125551443433066400173320ustar00rootroot00000000000000/*------------------------------------------------------------------------- * * hypopg_index.h: Implementation of hypothetical indexes for PostgreSQL * * This file contains all includes for the internal code related to * hypothetical indexes support. * * This program is open source, licensed under the PostgreSQL license. * For license terms, see the LICENSE file. * * Copyright (C) 2015-2023: Julien Rouhaud * *------------------------------------------------------------------------- */ #ifndef _HYPOPG_INDEX_H_ #define _HYPOPG_INDEX_H_ #if PG_VERSION_NUM >= 90600 #include "access/amapi.h" #endif #include "optimizer/plancat.h" #include "tcop/utility.h" #define HYPO_INDEX_NB_COLS 12 /* # of column hypopg() returns */ #define HYPO_INDEX_CREATE_COLS 2 /* # of column hypopg_create_index() * returns */ #define HYPO_HIDDEN_INDEX_COLS 1 /* # of column hypopg_hidden_indexes() * returns */ #if PG_VERSION_NUM >= 90600 /* hardcode some bloom values, bloom.h is not exported */ #define sizeof_BloomPageOpaqueData 8 #define sizeof_SignType 2 #define BLOOMTUPLEHDRSZ 6 #endif /*--- Structs --- */ /*-------------------------------------------------------- * Hypothetical index storage, pretty much an IndexOptInfo * Some dynamic informations such as pages and lines are not stored but * computed when the hypothetical index is used. 
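 *
 * The pages, tuples and tree_height fields are nevertheless kept in the
 * struct: they are refreshed by hypo_estimate_index() each time the
 * hypothetical index is injected into a RelOptInfo, so they follow the
 * current size estimates of the base table.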
*/ typedef struct hypoIndex { Oid oid; /* hypothetical index unique identifier */ Oid relid; /* related relation Oid */ Oid reltablespace; /* tablespace of the index, if set */ char *indexname; /* hypothetical index name */ BlockNumber pages; /* number of estimated disk pages for the * index */ double tuples; /* number of estimated tuples in the index */ #if PG_VERSION_NUM >= 90300 int tree_height; /* estimated index tree height, -1 if unknown */ #endif /* index descriptor informations */ int ncolumns; /* number of columns, only 1 for now */ int nkeycolumns; /* number of key columns */ short int *indexkeys; /* attnums */ Oid *indexcollations; /* OIDs of collations of index columns */ Oid *opfamily; /* OIDs of operator families for columns */ Oid *opclass; /* OIDs of opclass data types */ Oid *opcintype; /* OIDs of opclass declared input data types */ Oid *sortopfamily; /* OIDs of btree opfamilies, if orderable */ bool *reverse_sort; /* is sort order descending? */ bool *nulls_first; /* do NULLs come first in the sort order? */ Oid relam; /* OID of the access method (in pg_am) */ #if PG_VERSION_NUM >= 90600 amcostestimate_function amcostestimate; amcanreturn_function amcanreturn; #else RegProcedure amcostestimate; /* OID of the access method's cost fcn */ RegProcedure amcanreturn; /* OID of the access method's canreturn fcn */ #endif List *indexprs; /* expressions for non-simple index columns */ List *indpred; /* predicate if a partial index, else NIL */ bool predOK; /* true if predicate matches query */ bool unique; /* true if a unique index */ bool immediate; /* is uniqueness enforced immediately? */ #if PG_VERSION_NUM >= 90500 bool *canreturn; /* which index cols can be returned in an * index-only scan? */ #else bool canreturn; /* can index return IndexTuples? */ #endif bool amcanorderbyop; /* does AM support order by operator result? */ bool amoptionalkey; /* can query omit key for the first column? */ bool amsearcharray; /* can AM handle ScalarArrayOpExpr quals? */ bool amsearchnulls; /* can AM search for NULL/NOT NULL entries? */ bool amhasgettuple; /* does AM have amgettuple interface? */ bool amhasgetbitmap; /* does AM have amgetbitmap interface? */ #if PG_VERSION_NUM >= 110000 bool amcanparallel; /* does AM support parallel scan? */ bool amcaninclude; /* does AM support columns included with clause INCLUDE? */ #endif bool amcanunique; /* does AM support UNIQUE indexes? */ bool amcanmulticol; /* does AM support multi-column indexes? */ /* store some informations usually saved in catalogs */ List *options; /* WITH clause options: a list of DefElem */ bool amcanorder; /* does AM support order by column value? 
*/ } hypoIndex; /* List of hypothetic indexes for current backend */ extern List *hypoIndexes; /* List of hypothetical hidden existing indexes for current backend */ extern List *hypoHiddenIndexes; /*--- Functions --- */ void hypo_index_reset(void); PGDLLEXPORT Datum hypopg(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_create_index(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_drop_index(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_relation_size(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_get_indexdef(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_reset_index(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_hide_index(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_unhide_index(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_unhide_all_indexes(PG_FUNCTION_ARGS); PGDLLEXPORT Datum hypopg_hidden_indexes(PG_FUNCTION_ARGS); extern explain_get_index_name_hook_type prev_explain_get_index_name_hook; hypoIndex *hypo_get_index(Oid indexId); const char *hypo_explain_get_index_name_hook(Oid indexId); void hypo_injectHypotheticalIndex(PlannerInfo *root, Oid relationObjectId, bool inhparent, RelOptInfo *rel, Relation relation, hypoIndex * entry); void hypo_hideIndexes(RelOptInfo *rel); #endif hypopg-1.4.0/test/000077500000000000000000000000001443433066400140105ustar00rootroot00000000000000hypopg-1.4.0/test/sql/000077500000000000000000000000001443433066400146075ustar00rootroot00000000000000hypopg-1.4.0/test/sql/hypo_brin.sql000066400000000000000000000007141443433066400173230ustar00rootroot00000000000000-- Hypothetical BRIN index tests CREATE TABLE hypo_brin (id integer); INSERT INTO hypo_brin SELECT generate_series(1, 10000); ANALYZE hypo_brin; SELECT COUNT(*) AS nb FROM public.hypopg_create_index('CREATE INDEX ON hypo_brin USING brin (id);'); -- Should use hypothetical index SET enable_seqscan = 0; SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo_brin WHERE id = 1') e WHERE e ~ 'Bitmap Index Scan.*<\d+>brin_hypo_brin.*'; DROP TABLE hypo_brin; hypopg-1.4.0/test/sql/hypo_hash.sql000066400000000000000000000007301443433066400173120ustar00rootroot00000000000000-- hypothetical hash indexes, pg10+ -- Remove all the hypothetical indexes if any SELECT hypopg_reset(); -- Create normal index SELECT COUNT(*) AS NB FROM hypopg_create_index('CREATE INDEX ON hypo USING hash (id)'); -- Should use hypothetical index using a regular Index Scan SELECT COUNT(*) FROM do_explain('SELECT val FROM hypo WHERE id = 1') e WHERE e ~ 'Index Scan.*<\d+>hash_hypo.*'; -- Deparse the index DDL SELECT hypopg_get_indexdef(indexrelid) FROM hypopg(); hypopg-1.4.0/test/sql/hypo_hide_index.sql000066400000000000000000000067421443433066400205000ustar00rootroot00000000000000-- Hypothetically hiding existing indexes tests -- Remove all the hypothetical indexes if any SELECT hypopg_reset(); -- The EXPLAIN initial state SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; -- Create real index in hypo and use this index CREATE INDEX hypo_id_idx ON hypo(id); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; -- Should be zero SELECT COUNT(*) FROM hypopg_hidden_indexes(); -- The hypo_id_idx index should not be used SELECT hypopg_hide_index('hypo_id_idx'::regclass); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; -- Should be only one record SELECT COUNT(*) FROM hypopg_hidden_indexes(); SELECT table_name,index_name FROM hypopg_hidden_indexes; -- Create the real index again and -- EXPLAIN should use this index instead of the previous one 
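-- (hypo_id_idx stays hidden here: the hidden-index list is per-backend state
-- and creating another index on the table does not reset it.)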
CREATE index hypo_id_val_idx ON hypo(id, val); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_val_idx'; -- Shouldn't use any index SELECT hypopg_hide_index('hypo_id_val_idx'::regclass); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_val_idx'; -- Should be two records SELECT table_name,index_name FROM hypopg_hidden_indexes; -- Try to add one repeatedly or add another wrong index oid SELECT hypopg_hide_index('hypo_id_idx'::regclass); SELECT hypopg_hide_index('hypo'::regclass); SELECT hypopg_hide_index(0); -- Also of course can be used to hide hypothetical indexes SELECT COUNT(*) FROM hypopg_create_index('create index on hypo(id,val);'); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; SELECT hypopg_hide_index((SELECT indexrelid FROM hypopg_list_indexes LIMIT 1)); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- Should be only three records SELECT COUNT(*) FROM hypopg_hidden_indexes; -- Hypothetical indexes should be unhidden when deleting SELECT hypopg_drop_index((SELECT indexrelid FROM hypopg_list_indexes LIMIT 1)); -- Should become two records SELECT COUNT(*) FROM hypopg_hidden_indexes; -- Hypopg_reset can also unhidden the hidden indexes -- due to the deletion of hypothetical indexes. SELECT COUNT(*) FROM hypopg_create_index('create index on hypo(id,val);'); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; SELECT hypopg_hide_index((SELECT indexrelid FROM hypopg_list_indexes LIMIT 1)); -- Changed from three records to two records. SELECT COUNT(*) FROM hypopg_hidden_indexes; SELECT hypopg_reset(); SELECT COUNT(*) FROM hypopg_hidden_indexes; -- Unhide an index SELECT hypopg_unhide_index('hypo_id_idx'::regclass); SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'hypo_id_idx'; -- Should become one record SELECT table_name,index_name FROM hypopg_hidden_indexes; -- Try to delete one repeatedly or delete another wrong index oid SELECT hypopg_unhide_index('hypo_id_idx'::regclass); SELECT hypopg_unhide_index('hypo'::regclass); SELECT hypopg_unhide_index(0); -- Should still have one record SELECT table_name,index_name FROM hypopg_hidden_indexes; -- Unhide all indexes SELECT hypopg_unhide_all_indexes(); -- Should change back to the original zero SELECT COUNT(*) FROM hypopg_hidden_indexes(); -- Clean real indexes and hypothetical indexes DROP INDEX hypo_id_idx; DROP INDEX hypo_id_val_idx; SELECT hypopg_reset(); hypopg-1.4.0/test/sql/hypo_include.sql000066400000000000000000000016651443433066400200220ustar00rootroot00000000000000-- hypothetical indexes using INCLUDE keyword, pg11+ -- Remove all the hypothetical indexes if any SELECT hypopg_reset(); -- Make sure stats and visibility map are up to date VACUUM ANALYZE hypo; -- Should not use hypothetical index -- Create normal index SELECT COUNT(*) AS NB FROM hypopg_create_index('CREATE INDEX ON hypo (id)'); -- Should use hypothetical index using a regular Index Scan SELECT COUNT(*) FROM do_explain('SELECT val FROM hypo WHERE id = 1') e WHERE e ~ 'Index Scan.*<\d+>btree_hypo.*'; -- Remove all the hypothetical indexes SELECT hypopg_reset(); -- Create INCLUDE index SELECT COUNT(*) AS NB FROM hypopg_create_index('CREATE INDEX ON hypo (id) INCLUDE (val)'); -- Should use hypothetical index using an Index Only Scan SELECT COUNT(*) FROM do_explain('SELECT val FROM hypo WHERE id = 
1') e WHERE e ~ 'Index Only Scan.*<\d+>btree_hypo.*'; -- Deparse the index DDL SELECT hypopg_get_indexdef(indexrelid) FROM hypopg(); hypopg-1.4.0/test/sql/hypo_index_part.sql000066400000000000000000000023471443433066400205320ustar00rootroot00000000000000-- Hypothetical on partitioned tables CREATE TABLE hypo_part(id1 integer, id2 integer, id3 integer) PARTITION BY LIST (id1); CREATE TABLE hypo_part_1 PARTITION OF hypo_part FOR VALUES IN (1) PARTITION BY LIST (id2); CREATE TABLE hypo_part_1_1 PARTITION OF hypo_part_1 FOR VALUES IN (1); INSERT INTO hypo_part SELECT 1, 1, generate_series(1, 10000); ANALYZE hypo_part; SET enable_seqscan = 0; -- hypothetical index on root partitioned table should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part (id3)'); SELECT 1, COUNT(*) FROM do_explain('SELECT * FROM hypo_part WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part.*'; SELECT hypopg_reset(); -- hypothetical index on non-root partitioned table should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part_1 (id3)'); SELECT 2, COUNT(*) FROM do_explain('SELECT * FROM hypo_part_1 WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part.*'; SELECT hypopg_reset(); -- hypothetical index on partition should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part_1_1 (id3)'); SELECT 3, COUNT(*) FROM do_explain('SELECT * FROM hypo_part_1_1 WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part.*'; hypopg-1.4.0/test/sql/hypo_index_part_10.sql000066400000000000000000000017041443433066400210260ustar00rootroot00000000000000-- Hypothetical on partitioned tables CREATE TABLE hypo_part(id1 integer, id2 integer, id3 integer) PARTITION BY LIST (id1); CREATE TABLE hypo_part_1 PARTITION OF hypo_part FOR VALUES IN (1) PARTITION BY LIST (id2); CREATE TABLE hypo_part_1_1 PARTITION OF hypo_part_1 FOR VALUES IN (1); INSERT INTO hypo_part SELECT 1, 1, generate_series(1, 10000); ANALYZE hypo_part; -- hypothetical index on root partitioned table should not work SELECT hypopg_create_index('CREATE INDEX ON hypo_part (id1)'); -- hypothetical index on non-root partitioned table should not work SELECT hypopg_create_index('CREATE INDEX ON hypo_part_1 (id1)'); -- hypothetical index on partition should work SELECT COUNT(*) AS nb FROM hypopg_create_index('CREATE INDEX ON hypo_part_1_1 (id3)'); -- Should use hypothetical index SET enable_seqscan = 0; SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo_part WHERE id3 = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo_part_1_1.*'; hypopg-1.4.0/test/sql/hypopg.sql000066400000000000000000000071621443433066400166440ustar00rootroot00000000000000-- SETUP CREATE OR REPLACE FUNCTION do_explain(stmt text) RETURNS table(a text) AS $_$ DECLARE ret text; BEGIN FOR ret IN EXECUTE format('EXPLAIN (FORMAT text) %s', stmt) LOOP a := ret; RETURN next ; END LOOP; END; $_$ LANGUAGE plpgsql; CREATE EXTENSION hypopg; CREATE TABLE hypo (id integer, val text); INSERT INTO hypo SELECT i, 'line ' || i FROM generate_series(1,100000) f(i); ANALYZE hypo; -- TESTS SELECT COUNT(*) AS nb FROM public.hypopg_create_index('SELECT 1;CREATE INDEX ON hypo(id); SELECT 2'); SELECT schema_name, table_name, am_name FROM public.hypopg_list_indexes; -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo ORDER BY id') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- Should not use hypothetical 
index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- Add predicate index SELECT COUNT(*) AS nb FROM public.hypopg_create_index('CREATE INDEX ON hypo(id) WHERE id < 5'); -- This specific index should be used WITH ind AS ( SELECT indexrelid, row_number() OVER (ORDER BY indexrelid) AS num FROM public.hypopg() ), regexp AS ( SELECT regexp_replace(e, '.*<(\d+)>.*', E'\\1', 'g') AS r FROM do_explain('SELECT * FROM hypo WHERE id < 3') AS e ) SELECT num FROM ind JOIN regexp ON ind.indexrelid::text = regexp.r; -- Specify fillfactor SELECT COUNT(*) AS NB FROM public.hypopg_create_index('CREATE INDEX ON hypo(id) WITH (fillfactor = 10)'); -- Specify an incorrect fillfactor SELECT COUNT(*) AS NB FROM public.hypopg_create_index('CREATE INDEX ON hypo(id) WITH (fillfactor = 1)'); -- Index size estimation SELECT hypopg_relation_size(indexrelid) = current_setting('block_size')::bigint AS one_block FROM hypopg() ORDER BY indexrelid; -- Should detect invalid argument SELECT hypopg_relation_size(1); -- locally disable hypoopg SET hypopg.enabled to false; -- no hypothetical index should be used SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- locally re-enable hypoopg SET hypopg.enabled to true; -- hypothetical index should be used SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- Remove one hypothetical index SELECT hypopg_drop_index(indexrelid) FROM hypopg() ORDER BY indexrelid LIMIT 1; -- Remove all the hypothetical indexes SELECT hypopg_reset(); -- index on expression SELECT COUNT(*) AS NB FROM public.hypopg_create_index('CREATE INDEX ON hypo (md5(val))'); -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE md5(val) = md5(''line 1'')') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; -- Deparse an index DDL, with almost every possible pathcode SELECT hypopg_get_indexdef(indexrelid) FROM hypopg_create_index('create index on hypo using btree(id desc, id desc nulls first, id desc nulls last, cast(md5(val) as bpchar) bpchar_pattern_ops) with (fillfactor = 10) WHERE id < 1000 AND id +1 %2 = 3'); -- Make sure the old Oid generator still works. Test it while keeping existing -- entries, as both should be able to coexist. 
SET hypopg.use_real_oids = on; -- Should not use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; SELECT COUNT(*) AS nb FROM public.hypopg_create_index('CREATE INDEX ON hypo(id);'); -- Should use hypothetical index SELECT COUNT(*) FROM do_explain('SELECT * FROM hypo WHERE id = 1') e WHERE e ~ 'Index.*<\d+>btree_hypo.*'; hypopg-1.4.0/typedefs.list000066400000000000000000001512201443433066400155520ustar00rootroot00000000000000ABITVEC ACCESS_ALLOWED_ACE ACL ACL_SIZE_INFORMATION AFFIX ASN1_INTEGER ASN1_OBJECT ASN1_STRING AV A_ArrayExpr A_Const A_Expr A_Expr_Kind A_Indices A_Indirection A_Star AbsoluteTime AccessMethodInfo AccessPriv Acl AclItem AclMaskHow AclMode AclResult AcquireSampleRowsFunc ActiveSnapshotElt AddForeignUpdateTargets_function AffixNode AffixNodeData AfterTriggerEvent AfterTriggerEventChunk AfterTriggerEventData AfterTriggerEventList AfterTriggerShared AfterTriggerSharedData AfterTriggersData AfterTriggersQueryData AfterTriggersTableData AfterTriggersTransData Agg AggClauseCosts AggInfo AggPath AggSplit AggState AggStatePerAgg AggStatePerGroup AggStatePerHash AggStatePerPhase AggStatePerTrans AggStrategy Aggref AggrefExprState AlenState Alias AllocBlock AllocChunk AllocPointer AllocSet AllocSetContext AllocSetFreeList AllocateDesc AllocateDescKind AlterCollationStmt AlterDatabaseSetStmt AlterDatabaseStmt AlterDefaultPrivilegesStmt AlterDomainStmt AlterEnumStmt AlterEventTrigStmt AlterExtensionContentsStmt AlterExtensionStmt AlterFdwStmt AlterForeignServerStmt AlterFunctionStmt AlterObjectDependsStmt AlterObjectSchemaStmt AlterOpFamilyStmt AlterOperatorStmt AlterOwnerStmt AlterPolicyStmt AlterPublicationStmt AlterRoleSetStmt AlterRoleStmt AlterSeqStmt AlterSubscriptionStmt AlterSubscriptionType AlterSystemStmt AlterTSConfigType AlterTSConfigurationStmt AlterTSDictionaryStmt AlterTableCmd AlterTableMoveAllStmt AlterTableSpaceOptionsStmt AlterTableStmt AlterTableType AlterUserMappingStmt AlteredTableInfo AlternativeSubPlan AlternativeSubPlanState AnalyzeAttrComputeStatsFunc AnalyzeAttrFetchFunc AnalyzeForeignTable_function AnlIndexData AnyArrayType Append AppendPath AppendRelInfo AppendState Archive ArchiveEntryPtrType ArchiveFormat ArchiveHandle ArchiveMode ArchiveOpts ArchiverOutput ArchiverStage ArrayAnalyzeExtraData ArrayBuildState ArrayBuildStateAny ArrayBuildStateArr ArrayCoerceExpr ArrayConstIterState ArrayExpr ArrayExprIterState ArrayIOData ArrayIterator ArrayMapState ArrayMetaState ArrayParseState ArrayType AsyncQueueControl AsyncQueueEntry AttInMetadata AttStatsSlot AttoptCacheEntry AttoptCacheKey AttrDefInfo AttrDefault AttrMissing AttrNumber AttributeOpts AuthRequest AutoPrewarmSharedState AutoVacOpts AutoVacuumShmemStruct AutoVacuumWorkItem AutoVacuumWorkItemType AuxProcType BF_ctx BF_key BF_word BF_word_signed BIGNUM BIO BIO_METHOD BITVEC BITVECP BMS_Comparison BMS_Membership BN_CTX BOOL BOOLEAN BOX BTArrayKeyInfo BTBuildState BTCycleId BTIndexStat BTInsertState BTInsertStateData BTLeader BTMetaPageData BTOneVacInfo BTPS_State BTPageOpaque BTPageOpaqueData BTPageStat BTPageState BTParallelScanDesc BTScanInsert BTScanInsertData BTScanOpaque BTScanOpaqueData BTScanPos BTScanPosData BTScanPosItem BTShared BTSortArrayContext BTSpool BTStack BTStackData BTVacInfo BTVacState BTWriteState BYTE Backend BackendId BackendParameters BackendState BackendType BackgroundWorker BackgroundWorkerArray BackgroundWorkerHandle BackgroundWorkerSlot Barrier BaseBackupCmd BeginDirectModify_function 
BeginForeignInsert_function BeginForeignModify_function BeginForeignScan_function BeginSampleScan_function BernoulliSamplerData BgWorkerStartTime BgwHandleStatus BinaryArithmFunc BipartiteMatchState BitmapAnd BitmapAndPath BitmapAndState BitmapHeapPath BitmapHeapScan BitmapHeapScanState BitmapIndexScan BitmapIndexScanState BitmapOr BitmapOrPath BitmapOrState Bitmapset BlobInfo Block BlockId BlockIdData BlockInfoRecord BlockNumber BlockSampler BlockSamplerData BlockedProcData BlockedProcsData BloomBuildState BloomMetaPageData BloomOptions BloomPageOpaque BloomPageOpaqueData BloomScanOpaque BloomScanOpaqueData BloomSignatureWord BloomState BloomTuple BlowfishContext BoolAggState BoolExpr BoolExprType BoolTestType BooleanTest BpChar BrinBuildState BrinDesc BrinMemTuple BrinMetaPageData BrinOpaque BrinOpcInfo BrinOptions BrinRevmap BrinSpecialSpace BrinStatsData BrinTuple BrinValues BtreeCheckState BtreeLevel Bucket BufFile Buffer BufferAccessStrategy BufferAccessStrategyType BufferCachePagesContext BufferCachePagesRec BufferDesc BufferDescPadded BufferHeapTupleTableSlot BufferLookupEnt BufferStrategyControl BufferTag BufferUsage BuildAccumulator BuiltinScript BulkInsertState BulkInsertStateData CACHESIGN CAC_state CCFastEqualFN CCHashFN CEOUC_WAIT_MODE CFuncHashTabEntry CHAR CHECKPOINT CHKVAL CIRCLE CMPDAffix CONTEXT COP CRITICAL_SECTION CRSSnapshotAction CState CTEMaterialize CV C_block CachedExpression CachedPlan CachedPlanSource CallContext CallStmt CancelRequestPacket CaseExpr CaseTestExpr CaseWhen Cash CastInfo CatCList CatCTup CatCache CatCacheHeader CatalogId CatalogIndexState ChangeVarNodes_context CheckPoint CheckPointStmt CheckpointStatsData CheckpointerRequest CheckpointerShmemStruct Chromosome CkptSortItem CkptTsStatus ClientAuthentication_hook_type ClientCertMode ClientData ClonePtrType ClosePortalStmt ClosePtrType Clump ClusterInfo ClusterStmt CmdType CoalesceExpr CoerceParamHook CoerceToDomain CoerceToDomainValue CoerceViaIO CoercionContext CoercionForm CoercionPathType CollAliasData CollInfo CollateClause CollateExpr CollateStrength CollectedATSubcmd CollectedCommand CollectedCommandType ColorTrgm ColorTrgmInfo ColumnCompareData ColumnDef ColumnIOData ColumnRef ColumnsHashData CombinationGenerator ComboCidEntry ComboCidEntryData ComboCidKey ComboCidKeyData Command CommandDest CommandId CommentItem CommentStmt CommitTimestampEntry CommitTimestampShared CommonEntry CommonTableExpr CompareScalarsContext CompiledExprState CompositeIOData CompositeTypeStmt CompoundAffixFlag CompressionAlgorithm CompressorState ConditionVariable ConditionalStack ConfigData ConfigVariable ConnCacheEntry ConnCacheKey ConnStatusType ConnType ConnectionStateEnum ConsiderSplitContext Const ConstrCheck ConstrType Constraint ConstraintCategory ConstraintInfo ConstraintsSetStmt ControlData ControlFileData ConvInfo ConvProcInfo ConversionLocation ConvertRowtypeExpr CookedConstraint CopyDest CopyInsertMethod CopyMultiInsertBuffer CopyMultiInsertInfo CopyState CopyStateData CopyStmt Cost CostSelector Counters CoverExt CoverPos CreateAmStmt CreateCastStmt CreateConversionStmt CreateDomainStmt CreateEnumStmt CreateEventTrigStmt CreateExtensionStmt CreateFdwStmt CreateForeignServerStmt CreateForeignTableStmt CreateFunctionStmt CreateOpClassItem CreateOpClassStmt CreateOpFamilyStmt CreatePLangStmt CreatePolicyStmt CreatePublicationStmt CreateRangeStmt CreateReplicationSlotCmd CreateRoleStmt CreateSchemaStmt CreateSchemaStmtContext CreateSeqStmt CreateStatsStmt CreateStmt CreateStmtContext CreateSubscriptionStmt 
CreateTableAsStmt CreateTableSpaceStmt CreateTransformStmt CreateTrigStmt CreateUserMappingStmt CreatedbStmt CredHandle CteItem CteScan CteScanState CteState CtlCommand CtxtHandle CurrentOfExpr CustomExecMethods CustomOutPtrType CustomPath CustomScan CustomScanMethods CustomScanState CycleCtr DBState DCHCacheEntry DEADLOCK_INFO DECountItem DH DIR DNSServiceErrorType DNSServiceRef DR_copy DR_intorel DR_printtup DR_sqlfunction DR_transientrel DSA DWORD DataDumperPtr DataPageDeleteStack DateADT Datum DatumTupleFields DbInfo DbInfoArr DeClonePtrType DeadLockState DeallocateStmt DeclareCursorStmt DecodedBkpBlock DecodingOutputState DefElem DefElemAction DefaultACLInfo DefineStmt DeleteStmt DependencyGenerator DependencyGeneratorData DependencyType DestReceiver DictISpell DictInt DictSimple DictSnowball DictSubState DictSyn DictThesaurus DimensionInfo DirectoryMethodData DirectoryMethodFile DisableTimeoutParams DiscardMode DiscardStmt DistinctExpr DoStmt DocRepresentation DomainConstraintCache DomainConstraintRef DomainConstraintState DomainConstraintType DomainIOData DropBehavior DropOwnedStmt DropReplicationSlotCmd DropRoleStmt DropStmt DropSubscriptionStmt DropTableSpaceStmt DropUserMappingStmt DropdbStmt DumpComponents DumpId DumpOptions DumpSignalInformation DumpableObject DumpableObjectType DynamicFileList DynamicZoneAbbrev EC_KEY EDGE ENGINE EOM_flatten_into_method EOM_get_flat_size_method EPQState EPlan EState EVP_CIPHER EVP_CIPHER_CTX EVP_MD EVP_MD_CTX EVP_PKEY EachState Edge EditableObjectType ElementsState EnableTimeoutParams EndBlobPtrType EndBlobsPtrType EndDataPtrType EndDirectModify_function EndForeignInsert_function EndForeignModify_function EndForeignScan_function EndSampleScan_function EnumItem EolType EphemeralNameRelationType EphemeralNamedRelation EphemeralNamedRelationData EphemeralNamedRelationMetadata EphemeralNamedRelationMetadataData EquivalenceClass EquivalenceMember ErrorContextCallback ErrorData EstimateDSMForeignScan_function EventTriggerCacheEntry EventTriggerCacheItem EventTriggerCacheStateType EventTriggerData EventTriggerEvent EventTriggerInfo EventTriggerQueryState ExceptionLabelMap ExceptionMap ExclusiveBackupState ExecAuxRowMark ExecEvalSubroutine ExecForeignDelete_function ExecForeignInsert_function ExecForeignUpdate_function ExecParallelEstimateContext ExecParallelInitializeDSMContext ExecPhraseData ExecProcNodeMtd ExecRowMark ExecScanAccessMtd ExecScanRecheckMtd ExecStatus ExecStatusType ExecuteStmt ExecutorCheckPerms_hook_type ExecutorEnd_hook_type ExecutorFinish_hook_type ExecutorRun_hook_type ExecutorStart_hook_type ExpandedArrayHeader ExpandedObjectHeader ExpandedObjectMethods ExpandedRecordFieldInfo ExpandedRecordHeader ExplainDirectModify_function ExplainForeignModify_function ExplainForeignScan_function ExplainFormat ExplainOneQuery_hook_type ExplainState ExplainStmt ExportedSnapshot Expr ExprContext ExprContextCallbackFunction ExprContext_CB ExprDoneCond ExprEvalOp ExprEvalOpLookup ExprEvalStep ExprState ExprStateEvalFunc ExtensibleNode ExtensibleNodeEntry ExtensibleNodeMethods ExtensionControlFile ExtensionInfo ExtensionMemberId ExtensionVersionInfo FDWCollateState FD_SET FILE FILETIME FSMAddress FSMPage FSMPageData FakeRelCacheEntry FakeRelCacheEntryData FastPathStrongRelationLockData FdwInfo FdwRoutine FetchDirection FetchStmt FieldNot FieldSelect FieldStore File FileFdwExecutionState FileFdwPlanState FileNameMap FileTag FinalPathExtraData FindSplitData FindSplitStrat FixedParallelExecutorState FixedParallelState FixedParamState FlagMode 
FlushPosition FmgrBuiltin FmgrHookEventType FmgrInfo ForeignDataWrapper ForeignKeyCacheInfo ForeignKeyOptInfo ForeignPath ForeignScan ForeignScanState ForeignServer ForeignServerInfo ForeignTable ForkNumber FormData_pg_aggregate FormData_pg_am FormData_pg_amop FormData_pg_amproc FormData_pg_attrdef FormData_pg_attribute FormData_pg_auth_members FormData_pg_authid FormData_pg_cast FormData_pg_class FormData_pg_collation FormData_pg_constraint FormData_pg_conversion FormData_pg_database FormData_pg_default_acl FormData_pg_depend FormData_pg_enum FormData_pg_event_trigger FormData_pg_extension FormData_pg_foreign_data_wrapper FormData_pg_foreign_server FormData_pg_foreign_table FormData_pg_index FormData_pg_inherits FormData_pg_language FormData_pg_largeobject FormData_pg_largeobject_metadata FormData_pg_namespace FormData_pg_opclass FormData_pg_operator FormData_pg_opfamily FormData_pg_partitioned_table FormData_pg_pltemplate FormData_pg_policy FormData_pg_proc FormData_pg_publication FormData_pg_publication_rel FormData_pg_range FormData_pg_replication_origin FormData_pg_rewrite FormData_pg_sequence FormData_pg_sequence_data FormData_pg_shdepend FormData_pg_statistic FormData_pg_statistic_ext FormData_pg_subscription FormData_pg_subscription_rel FormData_pg_tablespace FormData_pg_transform FormData_pg_trigger FormData_pg_ts_config FormData_pg_ts_config_map FormData_pg_ts_dict FormData_pg_ts_parser FormData_pg_ts_template FormData_pg_type FormData_pg_user_mapping Form_pg_aggregate Form_pg_am Form_pg_amop Form_pg_amproc Form_pg_attrdef Form_pg_attribute Form_pg_auth_members Form_pg_authid Form_pg_cast Form_pg_class Form_pg_collation Form_pg_constraint Form_pg_conversion Form_pg_database Form_pg_default_acl Form_pg_depend Form_pg_enum Form_pg_event_trigger Form_pg_extension Form_pg_foreign_data_wrapper Form_pg_foreign_server Form_pg_foreign_table Form_pg_index Form_pg_inherits Form_pg_language Form_pg_largeobject Form_pg_largeobject_metadata Form_pg_namespace Form_pg_opclass Form_pg_operator Form_pg_opfamily Form_pg_partitioned_table Form_pg_pltemplate Form_pg_policy Form_pg_proc Form_pg_publication Form_pg_publication_rel Form_pg_range Form_pg_replication_origin Form_pg_rewrite Form_pg_sequence Form_pg_sequence_data Form_pg_shdepend Form_pg_statistic Form_pg_statistic_ext Form_pg_subscription Form_pg_subscription_rel Form_pg_tablespace Form_pg_transform Form_pg_trigger Form_pg_ts_config Form_pg_ts_config_map Form_pg_ts_dict Form_pg_ts_parser Form_pg_ts_template Form_pg_type Form_pg_user_mapping FormatNode FreeBlockNumberArray FreeListData FreePageBtree FreePageBtreeHeader FreePageBtreeInternalKey FreePageBtreeLeafKey FreePageBtreeSearchResult FreePageManager FreePageSpanLeader FromCharDateMode FromExpr FullTransactionId FuncCall FuncCallContext FuncCandidateList FuncDetailCode FuncExpr FuncInfo FuncLookupError Function FunctionCallInfo FunctionCallInfoBaseData FunctionParameter FunctionParameterMode FunctionScan FunctionScanPerFuncState FunctionScanState FuzzyAttrMatchState GBT_NUMKEY GBT_NUMKEY_R GBT_VARKEY GBT_VARKEY_R GENERAL_NAME GISTBuildBuffers GISTBuildState GISTENTRY GISTInsertStack GISTInsertState GISTNodeBuffer GISTNodeBufferPage GISTPageOpaque GISTPageOpaqueData GISTPageSplitInfo GISTSTATE GISTScanOpaque GISTScanOpaqueData GISTSearchHeapItem GISTSearchItem GISTTYPE GIST_SPLITVEC GMReaderTupleBuffer GV Gather GatherMerge GatherMergePath GatherMergeState GatherPath GatherState Gene GeneratePruningStepsContext GenerationBlock GenerationChunk GenerationContext GenerationPointer 
GenericCosts GenericXLogState GeqoPrivateData GetForeignJoinPaths_function GetForeignPaths_function GetForeignPlan_function GetForeignRelSize_function GetForeignRowMarkType_function GetForeignUpperPaths_function GetState GiSTOptions GinBtree GinBtreeData GinBtreeDataLeafInsertData GinBtreeEntryInsertData GinBtreeStack GinBuildState GinChkVal GinEntries GinEntryAccumulator GinIndexStat GinMetaPageData GinNullCategory GinOptions GinPageOpaque GinPageOpaqueData GinPlaceToPageRC GinPostingList GinQualCounts GinScanEntry GinScanKey GinScanOpaque GinScanOpaqueData GinState GinStatsData GinTernaryValue GinTupleCollector GinVacuumState GistBufferingMode GistBulkDeleteResult GistEntryVector GistInetKey GistNSN GistSplitUnion GistSplitVector GistVacState GlobalTransaction GrantRoleStmt GrantStmt GrantTargetType Group GroupPath GroupPathExtraData GroupResultPath GroupState GroupVarInfo GroupingFunc GroupingSet GroupingSetData GroupingSetKind GroupingSetsPath GucAction GucBoolAssignHook GucBoolCheckHook GucContext GucEnumAssignHook GucEnumCheckHook GucIntAssignHook GucIntCheckHook GucRealAssignHook GucRealCheckHook GucShowHook GucSource GucStack GucStackState GucStringAssignHook GucStringCheckHook HANDLE HASHACTION HASHBUCKET HASHCTL HASHELEMENT HASHHDR HASHSEGMENT HASH_SEQ_STATUS HCRYPTPROV HE HEntry HIST_ENTRY HKEY HLOCAL HMODULE HOldEntry HRESULT HSParser HSpool HStore HTAB HTSV_Result HV Hash HashAllocFunc HashBuildState HashCompareFunc HashCopyFunc HashIndexStat HashInstrumentation HashJoin HashJoinState HashJoinTable HashJoinTuple HashMemoryChunk HashMetaPage HashMetaPageData HashPageOpaque HashPageOpaqueData HashPageStat HashPath HashScanOpaque HashScanOpaqueData HashScanPosData HashScanPosItem HashSkewBucket HashState HashValueFunc HbaLine HbaToken HeadlineJsonState HeadlineParsedText HeadlineWordEntry HeapScanDesc HeapTuple HeapTupleData HeapTupleFields HeapTupleHeader HeapTupleHeaderData HeapTupleTableSlot HistControl HotStandbyState I32 ICU_Convert_Func ID INFIX INT128 INTERFACE_INFO IOFuncSelector IPCompareMethod ITEM IV IdentLine IdentifierLookup IdentifySystemCmd IfStackElem ImportForeignSchemaStmt ImportForeignSchemaType ImportForeignSchema_function ImportQual IncludeWal InclusionOpaque IncrementVarSublevelsUp_context Index IndexAMProperty IndexAmRoutine IndexArrayKeyInfo IndexAttachInfo IndexAttrBitmapKind IndexBuildCallback IndexBuildResult IndexBulkDeleteCallback IndexBulkDeleteResult IndexClause IndexClauseSet IndexElem IndexFetchHeapData IndexFetchTableData IndexInfo IndexList IndexOnlyScan IndexOnlyScanState IndexOptInfo IndexPath IndexRuntimeKeyInfo IndexScan IndexScanDesc IndexScanState IndexStateFlagsAction IndexStmt IndexTuple IndexTupleData IndexUniqueCheck IndexVacuumInfo IndxInfo InferClause InferenceElem InfoItem InhInfo InheritableSocket InheritanceKind InitSampleScan_function InitializeDSMForeignScan_function InitializeWorkerForeignScan_function InlineCodeBlock InsertStmt Instrumentation Int128AggState Int8TransTypeData IntRBTreeNode IntegerSet InternalDefaultACL InternalGrant Interval IntoClause InvalidationChunk InvalidationListHeader IpcMemoryId IpcMemoryKey IpcMemoryState IpcSemaphoreId IpcSemaphoreKey IsForeignRelUpdatable_function IsForeignScanParallelSafe_function IspellDict Item ItemId ItemIdData ItemPointer ItemPointerData IterateDirectModify_function IterateForeignScan_function IterateJsonStringValuesState JEntry JHashState JOBOBJECTINFOCLASS JOBOBJECT_BASIC_LIMIT_INFORMATION JOBOBJECT_BASIC_UI_RESTRICTIONS JOBOBJECT_SECURITY_LIMIT_INFORMATION JitContext 
JitInstrumentation JitProviderCallbacks JitProviderCompileExprCB JitProviderInit JitProviderReleaseContextCB JitProviderResetAfterErrorCB Join JoinCostWorkspace JoinExpr JoinHashEntry JoinPath JoinPathExtraData JoinState JoinType JsObject JsValue JsonAggState JsonBaseObjectInfo JsonHashEntry JsonIterateStringValuesAction JsonLexContext JsonLikeRegexContext JsonParseContext JsonPath JsonPathBool JsonPathExecContext JsonPathExecResult JsonPathGinAddPathItemFunc JsonPathGinContext JsonPathGinExtractNodesFunc JsonPathGinNode JsonPathGinNodeType JsonPathGinPath JsonPathGinPathItem JsonPathItem JsonPathItemType JsonPathKeyword JsonPathParseItem JsonPathParseResult JsonPathPredicateCallback JsonPathString JsonSemAction JsonTokenType JsonTransformStringValuesAction JsonTypeCategory JsonValueList JsonValueListIterator Jsonb JsonbAggState JsonbContainer JsonbInState JsonbIterState JsonbIterator JsonbIteratorToken JsonbPair JsonbParseState JsonbTypeCategory JsonbValue JunkFilter KeyArray KeySuffix KeyWord LARGE_INTEGER LDAP LDAPMessage LDAPURLDesc LDAP_TIMEVAL LINE LLVMAttributeRef LLVMBasicBlockRef LLVMBuilderRef LLVMIntPredicate LLVMJitContext LLVMJitHandle LLVMMemoryBufferRef LLVMModuleRef LLVMOrcJITStackRef LLVMOrcModuleHandle LLVMOrcTargetAddress LLVMPassManagerBuilderRef LLVMPassManagerRef LLVMSharedModuleRef LLVMTargetMachineRef LLVMTargetRef LLVMTypeRef LLVMValueRef LOCALLOCK LOCALLOCKOWNER LOCALLOCKTAG LOCALPREDICATELOCK LOCK LOCKMASK LOCKMETHODID LOCKMODE LOCKTAG LONG LONG_PTR LOOP LPBYTE LPCTSTR LPCWSTR LPDWORD LPSECURITY_ATTRIBUTES LPSERVICE_STATUS LPSTR LPTHREAD_START_ROUTINE LPTSTR LPVOID LPWSTR LSEG LUID LVRelStats LWLock LWLockHandle LWLockMinimallyPadded LWLockMode LWLockPadded LabelProvider LagTracker LargeObjectDesc LastAttnumInfo Latch LerpFunc LexDescr LexemeEntry LexemeHashKey LexemeInfo LexemeKey LexizeData LibraryInfo Limit LimitPath LimitState LimitStateCond List ListCell ListDictionary ListParsedLex ListenAction ListenActionKind ListenStmt LoadStmt LocalBufferLookupEnt LocalPgBackendStatus LocalTransactionId LocationIndex LockAcquireResult LockClauseStrength LockData LockInfoData LockInstanceData LockMethod LockMethodData LockRelId LockRows LockRowsPath LockRowsState LockStmt LockTagType LockTupleMode LockViewRecurse_context LockWaitPolicy LockingClause LogOpts LogStmtLevel LogicalDecodeBeginCB LogicalDecodeChangeCB LogicalDecodeCommitCB LogicalDecodeFilterByOriginCB LogicalDecodeMessageCB LogicalDecodeShutdownCB LogicalDecodeStartupCB LogicalDecodeTruncateCB LogicalDecodingContext LogicalErrorCallbackState LogicalOutputPluginInit LogicalOutputPluginWriterPrepareWrite LogicalOutputPluginWriterUpdateProgress LogicalOutputPluginWriterWrite LogicalRepBeginData LogicalRepCommitData LogicalRepCtxStruct LogicalRepRelId LogicalRepRelMapEntry LogicalRepRelation LogicalRepTupleData LogicalRepTyp LogicalRepWorker LogicalRepWorkerId LogicalRewriteMappingData LogicalTape LogicalTapeSet MAGIC MBuf MCVItem MCVList MEMORY_BASIC_INFORMATION MINIDUMPWRITEDUMP MINIDUMP_TYPE MJEvalResult MVDependencies MVDependency MVNDistinct MVNDistinctItem Material MaterialPath MaterialState MdfdVec MemoryContext MemoryContextCallback MemoryContextCallbackFunction MemoryContextCounters MemoryContextData MemoryContextMethods MemoryStatsPrintFunc MergeAppend MergeAppendPath MergeAppendState MergeJoin MergeJoinClause MergeJoinState MergePath MergeScanSelCache MetaCommand MinMaxAggInfo MinMaxAggPath MinMaxExpr MinMaxOp MinimalTuple MinimalTupleData MinimalTupleTableSlot MinmaxOpaque ModifyTable ModifyTablePath 
ModifyTableState MorphOpaque MsgType MultiAssignRef MultiSortSupport MultiSortSupportData MultiXactId MultiXactMember MultiXactOffset MultiXactStateData MultiXactStatus MyData NDBOX NODE NUMCacheEntry NUMDesc NUMProc NV Name NameData NameHashEntry NamedArgExpr NamedLWLockTranche NamedLWLockTrancheRequest NamedTuplestoreScan NamedTuplestoreScanState NamespaceInfo NestLoop NestLoopParam NestLoopState NestPath NewColumnValue NewConstraint NextSampleBlock_function NextSampleTuple_function NextValueExpr Node NodeTag NonEmptyRange Notification NotifyStmt Nsrt NullIfExpr NullTest NullTestType NullableDatum Numeric NumericAggState NumericDigit NumericSortSupport NumericSumAccum NumericVar OM_uint32 OP OSAPerGroupState OSAPerQueryState OSInfo OSSLCipher OSSLDigest OSVERSIONINFO OVERLAPPED ObjectAccessDrop ObjectAccessNamespaceSearch ObjectAccessPostAlter ObjectAccessPostCreate ObjectAccessType ObjectAddress ObjectAddressAndFlags ObjectAddressExtra ObjectAddressStack ObjectAddresses ObjectClass ObjectPropertyType ObjectType ObjectWithArgs Offset OffsetNumber OffsetVarNodes_context Oid OidOptions OkeysState OldSerXidControl OldSnapshotControlData OldToNewMapping OldToNewMappingData OldTriggerInfo OnCommitAction OnCommitItem OnConflictAction OnConflictClause OnConflictExpr OnConflictSetState OpBtreeInterpretation OpClassCacheEnt OpExpr OpFamilyMember OpFamilyOpFuncGroup OpclassInfo Operator OperatorElement OpfamilyInfo OprCacheEntry OprCacheKey OprInfo OprProofCacheEntry OprProofCacheKey OutputContext OutputPluginCallbacks OutputPluginOptions OutputPluginOutputType OverrideSearchPath OverrideStackEntry OverridingKind PACE_HEADER PACL PATH PBOOL PCtxtHandle PFN PGAlignedBlock PGAlignedXLogBlock PGAsyncStatusType PGCALL2 PGChecksummablePage PGContextVisibility PGEvent PGEventConnDestroy PGEventConnReset PGEventId PGEventProc PGEventRegister PGEventResultCopy PGEventResultCreate PGEventResultDestroy PGFInfoFunction PGFunction PGLZ_HistEntry PGLZ_Strategy PGMessageField PGModuleMagicFunction PGNoticeHooks PGOutputData PGPROC PGP_CFB PGP_Context PGP_MPI PGP_PubKey PGP_S2K PGPing PGQueryClass PGRUsage PGSemaphore PGSemaphoreData PGSetenvStatusType PGShmemHeader PGTransactionStatusType PGVerbosity PGXACT PG_Locale_Strategy PG_Lock_Status PG_init_t PGcancel PGconn PGdataValue PGlobjfuncs PGnotify PGresAttDesc PGresAttValue PGresParamDesc PGresult PGresult_data PHANDLE PLAINTREE PLTemplate PLUID_AND_ATTRIBUTES PLcword PLpgSQL_arrayelem PLpgSQL_case_when PLpgSQL_condition PLpgSQL_datum PLpgSQL_datum_type PLpgSQL_diag_item PLpgSQL_exception PLpgSQL_exception_block PLpgSQL_execstate PLpgSQL_expr PLpgSQL_func_hashkey PLpgSQL_function PLpgSQL_getdiag_kind PLpgSQL_if_elsif PLpgSQL_label_type PLpgSQL_nsitem PLpgSQL_nsitem_type PLpgSQL_plugin PLpgSQL_promise_type PLpgSQL_raise_option PLpgSQL_raise_option_type PLpgSQL_rec PLpgSQL_recfield PLpgSQL_resolve_option PLpgSQL_row PLpgSQL_stmt PLpgSQL_stmt_assert PLpgSQL_stmt_assign PLpgSQL_stmt_block PLpgSQL_stmt_call PLpgSQL_stmt_case PLpgSQL_stmt_close PLpgSQL_stmt_commit PLpgSQL_stmt_dynexecute PLpgSQL_stmt_dynfors PLpgSQL_stmt_execsql PLpgSQL_stmt_exit PLpgSQL_stmt_fetch PLpgSQL_stmt_forc PLpgSQL_stmt_foreach_a PLpgSQL_stmt_fori PLpgSQL_stmt_forq PLpgSQL_stmt_fors PLpgSQL_stmt_getdiag PLpgSQL_stmt_if PLpgSQL_stmt_loop PLpgSQL_stmt_open PLpgSQL_stmt_perform PLpgSQL_stmt_raise PLpgSQL_stmt_return PLpgSQL_stmt_return_next PLpgSQL_stmt_return_query PLpgSQL_stmt_rollback PLpgSQL_stmt_set PLpgSQL_stmt_type PLpgSQL_stmt_while PLpgSQL_trigtype PLpgSQL_type PLpgSQL_type_type 
PLpgSQL_var PLpgSQL_variable PLwdatum PLword PLyArrayToOb PLyCursorObject PLyDatumToOb PLyDatumToObFunc PLyExceptionEntry PLyExecutionContext PLyObToArray PLyObToDatum PLyObToDatumFunc PLyObToDomain PLyObToScalar PLyObToTransform PLyObToTuple PLyObject_AsString_t PLyPlanObject PLyProcedure PLyProcedureEntry PLyProcedureKey PLyResultObject PLySRFState PLySavedArgs PLyScalarToOb PLySubtransactionData PLySubtransactionObject PLyTransformToOb PLyTupleToOb PLyUnicode_FromStringAndSize_t PLy_elog_impl_t PMINIDUMP_CALLBACK_INFORMATION PMINIDUMP_EXCEPTION_INFORMATION PMINIDUMP_USER_STREAM_INFORMATION PMSignalData PMSignalReason PMState POLYGON PQArgBlock PQEnvironmentOption PQExpBuffer PQExpBufferData PQcommMethods PQconninfoOption PQnoticeProcessor PQnoticeReceiver PQprintOpt PREDICATELOCK PREDICATELOCKTAG PREDICATELOCKTARGET PREDICATELOCKTARGETTAG PROCESS_INFORMATION PROCLOCK PROCLOCKTAG PROC_HDR PROC_QUEUE PSID PSID_AND_ATTRIBUTES PSQL_COMP_CASE PSQL_ECHO PSQL_ECHO_HIDDEN PSQL_ERROR_ROLLBACK PTEntryArray PTIterationArray PTOKEN_PRIVILEGES PTOKEN_USER PUTENVPROC PVOID PX_Alias PX_Cipher PX_Combo PX_HMAC PX_MD Page PageData PageGistNSN PageHeader PageHeaderData PageXLogRecPtr PagetableEntry Pairs ParallelAppendState ParallelBitmapHeapState ParallelBlockTableScanDesc ParallelCompletionPtr ParallelContext ParallelExecutorInfo ParallelHashGrowth ParallelHashJoinBatch ParallelHashJoinBatchAccessor ParallelHashJoinState ParallelIndexScanDesc ParallelReadyList ParallelSlot ParallelState ParallelTableScanDesc ParallelTableScanDescData ParallelWorkerContext ParallelWorkerInfo Param ParamCompileHook ParamExecData ParamExternData ParamFetchHook ParamKind ParamListInfo ParamPathInfo ParamRef ParentMapEntry ParseCallbackState ParseExprKind ParseNamespaceItem ParseParamRefHook ParseState ParsedLex ParsedScript ParsedText ParsedWord ParserSetupHook ParserState PartClauseInfo PartClauseMatchStatus PartClauseTarget PartitionBoundInfo PartitionBoundInfoData PartitionBoundSpec PartitionCmd PartitionDesc PartitionDescData PartitionDirectory PartitionDirectoryEntry PartitionDispatch PartitionElem PartitionHashBound PartitionKey PartitionListValue PartitionPruneCombineOp PartitionPruneContext PartitionPruneInfo PartitionPruneState PartitionPruneStep PartitionPruneStepCombine PartitionPruneStepOp PartitionPruningData PartitionRangeBound PartitionRangeDatum PartitionRangeDatumKind PartitionRoutingInfo PartitionScheme PartitionSpec PartitionTupleRouting PartitionedRelPruneInfo PartitionedRelPruningData PartitionwiseAggregateType PasswordType Path PathClauseUsage PathCostComparison PathHashStack PathKey PathKeysComparison PathTarget Pattern_Prefix_Status Pattern_Type PendingFsyncEntry PendingRelDelete PendingUnlinkEntry PendingWriteback PerlInterpreter Perl_check_t Perl_ppaddr_t Permutation PgBackendGSSStatus PgBackendSSLStatus PgBackendStatus PgBenchExpr PgBenchExprLink PgBenchExprList PgBenchExprType PgBenchFunction PgBenchValue PgBenchValueType PgChecksumMode PgFdwAnalyzeState PgFdwDirectModifyState PgFdwModifyState PgFdwOption PgFdwPathExtraData PgFdwRelationInfo PgFdwScanState PgIfAddrCallback PgStat_ArchiverStats PgStat_BackendFunctionEntry PgStat_Counter PgStat_FunctionCallUsage PgStat_FunctionCounts PgStat_FunctionEntry PgStat_GlobalStats PgStat_Msg PgStat_MsgAnalyze PgStat_MsgArchiver PgStat_MsgAutovacStart PgStat_MsgBgWriter PgStat_MsgChecksumFailure PgStat_MsgDeadlock PgStat_MsgDropdb PgStat_MsgDummy PgStat_MsgFuncpurge PgStat_MsgFuncstat PgStat_MsgHdr PgStat_MsgInquiry PgStat_MsgRecoveryConflict 
PgStat_MsgResetcounter PgStat_MsgResetsharedcounter PgStat_MsgResetsinglecounter PgStat_MsgTabpurge PgStat_MsgTabstat PgStat_MsgTempFile PgStat_MsgVacuum PgStat_Shared_Reset_Target PgStat_Single_Reset_Type PgStat_StatDBEntry PgStat_StatFuncEntry PgStat_StatTabEntry PgStat_SubXactStatus PgStat_TableCounts PgStat_TableEntry PgStat_TableStatus PgStat_TableXactStatus PgXmlErrorContext PgXmlStrictness Pg_finfo_record Pg_magic_struct PipeProtoChunk PipeProtoHeader PlaceHolderInfo PlaceHolderVar Plan PlanDirectModify_function PlanForeignModify_function PlanInvalItem PlanRowMark PlanState PlannedStmt PlannerGlobal PlannerInfo PlannerParamItem Point Pointer PolicyInfo PolyNumAggState Pool PopulateArrayContext PopulateArrayState PopulateRecordCache PopulateRecordsetCache PopulateRecordsetState Port Portal PortalHashEnt PortalStatus PortalStrategy PostParseColumnRefHook PostgresPollingStatusType PostingItem PostponedQual PreParseColumnRefHook PredClass PredIterInfo PredIterInfoData PredXactList PredXactListElement PredicateLockData PredicateLockTargetType PrepParallelRestorePtrType PrepareStmt PreparedParamsData PreparedStatement PrewarmType PrintExtraTocPtrType PrintTocDataPtrType PrintfArgType PrintfArgValue PrintfTarget PrinttupAttrInfo PrivTarget PrivateRefCountEntry ProcArrayStruct ProcLangInfo ProcSignalReason ProcSignalSlot ProcState ProcessUtilityContext ProcessUtility_hook_type ProcessingMode ProgressCommandType ProjectSet ProjectSetPath ProjectSetState ProjectionInfo ProjectionPath ProtocolVersion PrsStorage PruneState PruneStepResult PsqlScanCallbacks PsqlScanQuoteType PsqlScanResult PsqlScanState PsqlScanStateData PsqlSettings Publication PublicationActions PublicationInfo PublicationRelInfo PullFilter PullFilterOps PushFilter PushFilterOps PushFunction PyCFunction PyCodeObject PyMappingMethods PyMethodDef PyModuleDef PyObject PySequenceMethods PyTypeObject Py_ssize_t QPRS_STATE QTN2QTState QTNode QUERYTYPE QUERY_SECURITY_CONTEXT_TOKEN_FN QualCost QualItem Query QueryDesc QueryEnvironment QueryInfo QueryItem QueryItemType QueryMode QueryOperand QueryOperator QueryRepresentation QueryRepresentationOperand QuerySource QueueBackendStatus QueuePosition RBTNode RBTOrderControl RBTree RBTreeIterator REPARSE_JUNCTION_DATA_BUFFER RIX RI_CompareHashEntry RI_CompareKey RI_ConstraintInfo RI_QueryHashEntry RI_QueryKey RTEKind RWConflict RWConflictPoolHeader RandomState Range RangeBound RangeBox RangeFunction RangeIOData RangeQueryClause RangeSubselect RangeTableFunc RangeTableFuncCol RangeTableSample RangeTblEntry RangeTblFunction RangeTblRef RangeType RangeVar RangeVarGetRelidCallback RawColumnDefault RawStmt ReInitializeDSMForeignScan_function ReScanForeignScan_function ReadBufPtrType ReadBufferMode ReadBytePtrType ReadExtraTocPtrType ReadFunc ReassignOwnedStmt RecheckForeignScan_function RecordCacheEntry RecordCompareData RecordIOData RecoveryLockListsEntry RecoveryTargetTimeLineGoal RecoveryTargetType RectBox RecursionContext RecursiveUnion RecursiveUnionPath RecursiveUnionState RefetchForeignRow_function RefreshMatViewStmt RegProcedure Regis RegisNode RegisteredBgWorker ReindexObjectType ReindexStmt RelFileNode RelFileNodeBackend RelIdCacheEnt RelInfo RelInfoArr RelMapFile RelMapping RelOptInfo RelOptKind RelToCheck RelToCluster RelabelType Relation RelationData RelationPtr RelationSyncEntry RelcacheCallbackFunction RelfilenodeMapEntry RelfilenodeMapKey Relids RelocationBufferInfo RelptrFreePageBtree RelptrFreePageManager RelptrFreePageSpanLeader RenameStmt ReopenPtrType ReorderBuffer 
ReorderBufferApplyChangeCB ReorderBufferApplyTruncateCB ReorderBufferBeginCB ReorderBufferChange ReorderBufferCommitCB ReorderBufferDiskChange ReorderBufferIterTXNEntry ReorderBufferIterTXNState ReorderBufferMessageCB ReorderBufferTXN ReorderBufferTXNByIdEnt ReorderBufferToastEnt ReorderBufferTupleBuf ReorderBufferTupleCidEnt ReorderBufferTupleCidKey ReorderTuple RepOriginId ReparameterizeForeignPathByChild_function ReplaceVarsFromTargetList_context ReplaceVarsNoMatchOption ReplicaIdentityStmt ReplicationKind ReplicationSlot ReplicationSlotCtlData ReplicationSlotOnDisk ReplicationSlotPersistency ReplicationSlotPersistentData ReplicationState ReplicationStateCtl ReplicationStateOnDisk ResTarget ReservoirState ReservoirStateData ResourceArray ResourceOwner ResourceReleaseCallback ResourceReleaseCallbackItem ResourceReleasePhase RestoreOptions RestorePass RestrictInfo Result ResultRelInfo ResultState ReturnSetInfo RevmapContents RewriteMappingDataEntry RewriteMappingFile RewriteRule RewriteState RmgrData RmgrDescData RmgrId RmgrIds RoleSpec RoleSpecType RoleStmtType RollupData RowCompareExpr RowCompareType RowExpr RowMarkClause RowMarkType RowSecurityDesc RowSecurityPolicy RuleInfo RuleLock RuleStmt RunningTransactions RunningTransactionsData SC_HANDLE SECURITY_ATTRIBUTES SECURITY_STATUS SEG SERIALIZABLEXACT SERIALIZABLEXID SERIALIZABLEXIDTAG SERVICE_STATUS SERVICE_STATUS_HANDLE SERVICE_TABLE_ENTRY SHA1_CTX SHA256_CTX SHA512_CTX SHM_QUEUE SID_AND_ATTRIBUTES SID_IDENTIFIER_AUTHORITY SID_NAME_USE SISeg SIZE_T SMgrRelation SMgrRelationData SOCKADDR SOCKET SPELL SPIPlanPtr SPITupleTable SPLITCOST SPNode SPNodeData SPPageDesc SQLCmd SQLDropObject SQLFunctionCache SQLFunctionCachePtr SQLFunctionParseInfoPtr SQLValueFunction SQLValueFunctionOp SSL SSLExtensionInfoContext SSL_CTX STARTUPINFO STRLEN SV SampleScan SampleScanGetSampleSize_function SampleScanState SamplerRandomState ScalarArrayOpExpr ScalarIOData ScalarItem ScalarMCVItem Scan ScanDirection ScanKey ScanKeyData ScanKeywordHashFunc ScanKeywordList ScanState ScanTypeControl SchemaQuery SecBuffer SecBufferDesc SecLabelItem SecLabelStmt SeenRelsEntry SelectStmt Selectivity SemTPadded SemiAntiJoinFactors SeqScan SeqScanState SeqTable SeqTableData SerCommitSeqNo SerializableXactHandle SerializedActiveRelMaps SerializedReindexState SerializedSnapshotData SerializedTransactionState Session SessionBackupState SetConstraintState SetConstraintStateData SetConstraintTriggerData SetExprState SetFunctionReturnMode SetOp SetOpCmd SetOpPath SetOpState SetOpStatePerGroup SetOpStrategy SetOperation SetOperationStmt SetToDefault SetupWorkerPtrType ShDependObjectInfo SharedBitmapState SharedDependencyObjectType SharedDependencyType SharedExecutorInstrumentation SharedFileSet SharedHashInfo SharedInvalCatalogMsg SharedInvalCatcacheMsg SharedInvalRelcacheMsg SharedInvalRelmapMsg SharedInvalSmgrMsg SharedInvalSnapshotMsg SharedInvalidationMessage SharedJitInstrumentation SharedRecordTableEntry SharedRecordTableKey SharedRecordTypmodRegistry SharedSortInfo SharedTuplestore SharedTuplestoreAccessor SharedTuplestoreChunk SharedTuplestoreParticipant SharedTypmodTableEntry Sharedsort ShellTypeInfo ShippableCacheEntry ShippableCacheKey ShmemIndexEnt ShutdownForeignScan_function ShutdownInformation ShutdownMode SignTSVector SimpleActionList SimpleActionListCell SimpleEcontextStackEntry SimpleOidList SimpleOidListCell SimpleStats SimpleStringList SimpleStringListCell SingleBoundSortItem Size SlabBlock SlabChunk SlabContext SlabSlot SlotErrCallbackArg SlotNumber SlruCtl 
SlruCtlData SlruErrorCause SlruFlush SlruFlushData SlruPageStatus SlruScanCallback SlruShared SlruSharedData SnapBuild SnapBuildOnDisk SnapBuildState Snapshot SnapshotData SnapshotType SockAddr Sort SortBy SortByDir SortByNulls SortCoordinate SortGroupClause SortItem SortPath SortShimExtra SortState SortSupport SortSupportData SortTuple SortTupleComparator SortedPoint SpGistBuildState SpGistCache SpGistDeadTuple SpGistDeadTupleData SpGistInnerTuple SpGistInnerTupleData SpGistLUPCache SpGistLastUsedPage SpGistLeafTuple SpGistLeafTupleData SpGistMetaPageData SpGistNodeTuple SpGistNodeTupleData SpGistPageOpaque SpGistPageOpaqueData SpGistScanOpaque SpGistScanOpaqueData SpGistSearchItem SpGistState SpGistTypeDesc SpecialJoinInfo SpinDelayStatus SplitInterval SplitLR SplitPoint SplitVar SplitedPageLayout StackElem StartBlobPtrType StartBlobsPtrType StartDataPtrType StartReplicationCmd StartupPacket StartupStatusEnum StatEntry StatExtEntry StatMsgType StateFileChunk StatisticExtInfo Stats StatsData StatsExtInfo StdAnalyzeData StdRdOptions Step StopList StopWorkersData StrategyNumber StreamCtl StringInfo StringInfoData StripnullState SubLink SubLinkType SubPlan SubPlanState SubTransactionId SubXactCallback SubXactCallbackItem SubXactEvent SubplanResultRelHashElem SubqueryScan SubqueryScanPath SubqueryScanState SubscriptingRef SubscriptingRefState Subscription SubscriptionInfo SubscriptionRelState SupportRequestCost SupportRequestIndexCondition SupportRequestRows SupportRequestSelectivity SupportRequestSimplify Syn SyncOps SyncRepConfigData SyncRequestType SysScanDesc SyscacheCallbackFunction SystemRowsSamplerData SystemSamplerData SystemTimeSamplerData TAR_MEMBER TBMIterateResult TBMIteratingState TBMIterator TBMSharedIterator TBMSharedIteratorState TBMStatus TBlockState TIDBitmap TM_FailureData TM_Result TOKEN_DEFAULT_DACL TOKEN_INFORMATION_CLASS TOKEN_PRIVILEGES TOKEN_USER TParser TParserCharTest TParserPosition TParserSpecial TParserState TParserStateAction TParserStateActionItem TQueueDestReceiver TRGM TSAnyCacheEntry TSConfigCacheEntry TSConfigInfo TSDictInfo TSDictionaryCacheEntry TSExecuteCallback TSLexeme TSParserCacheEntry TSParserInfo TSQuery TSQueryData TSQueryParserState TSQuerySign TSReadPointer TSTemplateInfo TSTokenTypeStorage TSVector TSVectorBuildState TSVectorData TSVectorParseState TSVectorStat TState TStoreState TYPCATEGORY T_Action T_WorkerStatus TabStatHashEntry TabStatusArray TableAmRoutine TableDataInfo TableFunc TableFuncRoutine TableFuncScan TableFuncScanState TableInfo TableLikeClause TableSampleClause TableScanDesc TableScanDescData TableSpaceCacheEntry TableSpaceOpts TablespaceList TablespaceListCell TapeBlockTrailer TapeShare TarMethodData TarMethodFile TargetEntry TclExceptionNameMap Tcl_DString Tcl_FileProc Tcl_HashEntry Tcl_HashTable Tcl_Interp Tcl_NotifierProcs Tcl_Obj Tcl_Time TestDecodingData TestSpec TextFreq TextPositionState TheLexeme TheSubstitute TidExpr TidHashKey TidPath TidScan TidScanState TimeADT TimeLineHistoryCmd TimeLineHistoryEntry TimeLineID TimeOffset TimeStamp TimeTzADT TimeZoneAbbrevTable TimeoutId TimeoutType Timestamp TimestampTz TmFromChar TmToChar TocEntry TokenAuxData TokenizedLine TrackItem TransInvalidationInfo TransState TransactionId TransactionState TransactionStateData TransactionStmt TransactionStmtKind TransformInfo TransformJsonStringValuesState TransitionCaptureState TrgmArc TrgmArcInfo TrgmBound TrgmColor TrgmColorInfo TrgmNFA TrgmPackArcInfo TrgmPackedArc TrgmPackedGraph TrgmPackedState TrgmPrefix TrgmState TrgmStateKey 
TrieChar Trigger TriggerData TriggerDesc TriggerEvent TriggerFlags TriggerInfo TriggerTransition TruncateStmt TsmRoutine TupOutputState TupSortStatus TupStoreStatus TupleConstr TupleConversionMap TupleDesc TupleHashEntry TupleHashEntryData TupleHashIterator TupleHashTable TupleQueueReader TupleTableSlot TupleTableSlotOps TuplesortInstrumentation TuplesortMethod TuplesortSpaceType Tuplesortstate Tuplestorestate TwoPhaseCallback TwoPhaseFileHeader TwoPhaseLockRecord TwoPhasePgStatRecord TwoPhasePredicateLockRecord TwoPhasePredicateRecord TwoPhasePredicateRecordType TwoPhasePredicateXactRecord TwoPhaseRecordOnDisk TwoPhaseRmgrId TwoPhaseStateData TxidEpoch TxidSnapshot Type TypeCacheEntry TypeCacheEnumData TypeCast TypeCat TypeFuncClass TypeInfo TypeName U U32 U8 UChar UCharIterator UColAttribute UColAttributeValue UCollator UConverter UErrorCode UINT ULARGE_INTEGER ULONG ULONG_PTR UV UVersionInfo Unique UniquePath UniquePathMethod UniqueState UnlistenStmt UnresolvedTup UnresolvedTupData UpdateStmt UpperRelationKind UpperUniquePath UserAuth UserMapping UserOpts VacAttrStats VacAttrStatsP VacOptTernaryValue VacuumParams VacuumRelation VacuumStmt ValidateIndexState Value ValuesScan ValuesScanState Var VarBit VarChar VarParamState VarString VarStringSortSupport Variable VariableAssignHook VariableCache VariableCacheData VariableSetKind VariableSetStmt VariableShowStmt VariableSpace VariableStatData VariableSubstituteHook VersionedQuery Vfd ViewCheckOption ViewOptions ViewStmt VirtualTransactionId VirtualTupleTableSlot Vsrt WAITORTIMERCALLBACK WAIT_ORDER WALInsertLock WALInsertLockPadded WCHAR WCOKind WFW_WaitOption WIDGET WIN32_FILE_ATTRIBUTE_DATA WORD WORKSTATE WSABUF WSADATA WSANETWORKEVENTS WSAPROTOCOL_INFO WaitEvent WaitEventActivity WaitEventClient WaitEventIO WaitEventIPC WaitEventSet WaitEventTimeout WaitPMResult WalCloseMethod WalLevel WalRcvData WalRcvExecResult WalRcvExecStatus WalRcvState WalRcvStreamOptions WalReceiverConn WalReceiverFunctionsType WalSnd WalSndCtlData WalSndSendDataCallback WalSndState WalTimeSample WalWriteMethod Walfile WindowAgg WindowAggPath WindowAggState WindowClause WindowClauseSortData WindowDef WindowFunc WindowFuncExprState WindowFuncLists WindowObject WindowObjectData WindowStatePerAgg WindowStatePerAggData WindowStatePerFunc WithCheckOption WithClause WordEntry WordEntryIN WordEntryPos WordEntryPosVector WordEntryPosVector1 WorkTableScan WorkTableScanState WorkerInfo WorkerInfoData WorkerInstrumentation WorkerJobDumpPtrType WorkerJobRestorePtrType Working_State WriteBufPtrType WriteBytePtrType WriteDataPtrType WriteExtraTocPtrType WriteFunc WritebackContext X509 X509_EXTENSION X509_NAME X509_NAME_ENTRY X509_STORE X509_STORE_CTX XLTW_Oper XLogCtlData XLogCtlInsert XLogDumpConfig XLogDumpPrivate XLogDumpStats XLogLongPageHeader XLogLongPageHeaderData XLogPageHeader XLogPageHeaderData XLogPageReadCB XLogPageReadPrivate XLogReaderState XLogRecData XLogRecPtr XLogRecord XLogRecordBlockCompressHeader XLogRecordBlockHeader XLogRecordBlockImageHeader XLogRecordBuffer XLogRedoAction XLogSegNo XLogSource XLogwrtResult XLogwrtRqst XPVIV XPVMG XactCallback XactCallbackItem XactEvent XactLockTableWaitInfo XidHorizonPrefetchState XidStatus XmlExpr XmlExprOp XmlOptionType XmlSerialize XmlTableBuilderData YYLTYPE YYSTYPE YY_BUFFER_STATE _SPI_connection _SPI_plan __AssignProcessToJobObject __CreateJobObject __CreateRestrictedToken __IsProcessInJob __QueryInformationJobObject __RegisterWaitForSingleObject __SetInformationJobObject _resultmap _stringlist abs 
acquireLocksOnSubLinks_context adjust_appendrel_attrs_context allocfunc ambeginscan_function ambuild_function ambuildempty_function ambuildphasename_function ambulkdelete_function amcanreturn_function amcostestimate_function amendscan_function amestimateparallelscan_function amgetbitmap_function amgettuple_function aminitparallelscan_function aminsert_function ammarkpos_function amoptions_function amparallelrescan_function amproperty_function amrescan_function amrestrpos_function amvacuumcleanup_function amvalidate_function array_iter array_unnest_fctx assign_collations_context autovac_table av_relation avl_dbase avl_node avl_tree avw_dbase backslashResult base_yy_extra_type basebackup_options bgworker_main_type binaryheap binaryheap_comparator bitmapword bits16 bits32 bits8 bloom_filter brin_column_state bytea cached_re_str cashKEY cfp check_agg_arguments_context check_function_callback check_network_data check_object_relabel_type check_password_hook_type check_ungrouped_columns_context chr clock_t cmpEntriesArg cmpfunc codes_t coercion collation_cache_entry color colormaprange config_var_value contain_aggs_of_level_context convert_testexpr_context copy_data_source_cb core_YYSTYPE core_yy_extra_type core_yyscan_t corrupt_items cost_qual_eval_context create_upper_paths_hook_type createdb_failure_params crosstab_HashEnt crosstab_cat_desc datapagemap_iterator_t datapagemap_t dateKEY datetkn dce_uuid_t decimal deparse_columns deparse_context deparse_expr_cxt deparse_namespace destructor dev_t digit directory_fctx disassembledLeaf dlist_head dlist_iter dlist_mutable_iter dlist_node ds_state dsa_area dsa_area_control dsa_area_pool dsa_area_span dsa_handle dsa_pointer dsa_pointer_atomic dsa_segment_header dsa_segment_index dsa_segment_map dshash_compare_function dshash_hash dshash_hash_function dshash_parameters dshash_partition dshash_table dshash_table_control dshash_table_handle dshash_table_item dsm_control_header dsm_control_item dsm_handle dsm_op dsm_segment dsm_segment_detach_callback eLogType ean13 eary ec_matches_callback_type ec_member_foreign_arg ec_member_matches_arg emit_log_hook_type eval_const_expressions_context event_trigger_command_tag_check_result event_trigger_support_data exec_thread_arg execution_state explain_get_index_name_hook_type f_smgr fd_set fe_scram_state fe_scram_state_enum file_action_t file_entry_t file_type_t filemap_t finalize_primnode_context find_dependent_phvs_context find_expr_references_context fix_join_expr_context fix_scan_expr_context fix_upper_expr_context flatten_join_alias_vars_context float4 float4KEY float8 float8KEY floating_decimal_32 floating_decimal_64 fmAggrefPtr fmExprContextCallbackFunction fmNodePtr fmStringInfo fmgr_hook_type foreign_glob_cxt foreign_loc_cxt freeaddrinfo_ptr_t freefunc fsec_t gbt_vsrt_arg gbtree_ninfo gbtree_vinfo generate_series_fctx generate_series_numeric_fctx generate_series_timestamp_fctx generate_series_timestamptz_fctx generate_subscripts_fctx get_agg_clause_costs_context get_attavgwidth_hook_type get_index_stats_hook_type get_relation_info_hook_type get_relation_stats_hook_type getaddrinfo_ptr_t getnameinfo_ptr_t gid_t gin_leafpage_items_state ginxlogCreatePostingTree ginxlogDeleteListPages ginxlogDeletePage ginxlogInsert ginxlogInsertDataInternal ginxlogInsertEntry ginxlogInsertListPage ginxlogRecompressDataLeaf ginxlogSplit ginxlogUpdateMeta ginxlogVacuumDataLeafPage gistxlogDelete gistxlogPage gistxlogPageDelete gistxlogPageReuse gistxlogPageSplit gistxlogPageUpdate grouping_sets_data gseg_picksplit_item 
gss_buffer_desc gss_cred_id_t gss_ctx_id_t gss_name_t gtrgm_consistent_cache gzFile hashfunc hbaPort heap_page_items_state help_handler hlCheck hstoreCheckKeyLen_t hstoreCheckValLen_t hstorePairs_t hstoreUniquePairs_t hstoreUpgrade_t hyperLogLogState ifState ilist import_error_callback_arg indexed_tlist inet inetKEY inet_struct init_function inline_cte_walker_context inline_error_callback_arg ino_t inquiry instr_time int128 int16 int16KEY int2vector int32 int32KEY int32_t int64 int64KEY int8 internalPQconninfoOption intptr_t intset_internal_node intset_leaf_node intset_node intvKEY itemIdSort itemIdSortData iterator jmp_buf join_search_hook_type json_aelem_action json_ofield_action json_scalar_action json_struct_action keyEntryData key_t lclContext lclTocEntry leafSegmentInfo leaf_item line_t lineno_t list_qsort_comparator locale_t locate_agg_of_level_context locate_var_of_level_context locate_windowfunc_context logstreamer_param lquery lquery_level lquery_variant ltree ltree_gist ltree_level ltxtquery mXactCacheEnt mac8KEY macKEY macaddr macaddr8 macaddr_sortsupport_state map_variable_attnos_context max_parallel_hazard_context mb2wchar_with_len_converter mbcharacter_incrementer mbdisplaylen_converter mblen_converter mbverifier md5_ctxt metastring mix_data_t mixedStruct mode_t movedb_failure_params mp_digit mp_int mp_result mp_sign mp_size mp_small mp_usmall mp_word mpz_t mxact mxtruncinfo needs_fmgr_hook_type nodeitem normal_rand_fctx ntile_context numeric object_access_hook_type off_t oidKEY oidvector on_dsm_detach_callback on_exit_nicely_callback ossl_EVP_cipher_func other output_type pagetable_hash pagetable_iterator pairingheap pairingheap_comparator pairingheap_node parallel_worker_main_type parse_error_callback_arg pendingPosition pgParameterStatus pg_atomic_flag pg_atomic_uint32 pg_atomic_uint64 pg_conn_host pg_conn_host_type pg_conv_map pg_crc32 pg_crc32c pg_ctype_cache pg_enc pg_enc2gettext pg_enc2name pg_encname pg_gssinfo pg_int64 pg_local_to_utf_combined pg_locale_t pg_mb_radix_tree pg_on_exit_callback pg_re_flags pg_saslprep_rc pg_sha224_ctx pg_sha256_ctx pg_sha384_ctx pg_sha512_ctx pg_stack_base_t pg_time_t pg_tz pg_tz_cache pg_tzenum pg_unicode_decomposition pg_utf_to_local_combined pg_uuid_t pg_wc_probefunc pg_wchar pg_wchar_tbl pgp_armor_headers_state pgpid_t pgsocket pgsql_thing_t pgssEntry pgssHashKey pgssJumbleState pgssLocationLen pgssSharedState pgssVersion pgstat_page pgstattuple_type pgthreadlock_t pid_t pivot_field planner_hook_type plperl_array_info plperl_call_data plperl_interp_desc plperl_proc_desc plperl_proc_key plperl_proc_ptr plperl_query_desc plperl_query_entry plpgsql_CastHashEntry plpgsql_CastHashKey plpgsql_HashEnt pltcl_call_state pltcl_interp_desc pltcl_proc_desc pltcl_proc_key pltcl_proc_ptr pltcl_query_desc pointer pos_trgm post_parse_analyze_hook_type pqbool pqsigfunc printQueryOpt printTableContent printTableFooter printTableOpt printTextFormat printTextLineFormat printTextLineWrap printTextRule printfunc priv_map process_file_callback_t process_sublinks_context proclist_head proclist_mutable_iter proclist_node promptStatus_t pthread_attr_t pthread_key_t pthread_mutex_t pthread_once_t pthread_t ptrdiff_t pull_var_clause_context pull_varattnos_context pull_varnos_context pull_vars_context pullup_replace_vars_context pushdown_safety_info qsort_arg_comparator query_pathkeys_callback radius_attribute radius_packet rangeTableEntry_used_context rank_context rbt_allocfunc rbt_combiner rbt_comparator rbt_freefunc reduce_outer_joins_state reference 
regex_arc_t regex_t regexp regexp_matches_ctx registered_buffer regmatch_t regoff_t regproc relopt_bool relopt_gen relopt_int relopt_kind relopt_parse_elt relopt_real relopt_string relopt_type relopt_value remoteConn remoteConnHashEnt remoteDep rendezvousHashEntry replace_rte_variables_callback replace_rte_variables_context rewrite_event rijndael_ctx rm_detail_t role_auth_extra row_security_policy_hook_type save_buffer scram_HMAC_ctx scram_state scram_state_enum sem_t sequence_magic set_join_pathlist_hook_type set_rel_pathlist_hook_type shm_mq shm_mq_handle shm_mq_iovec shm_mq_result shm_toc shm_toc_entry shm_toc_estimator shmem_startup_hook_type sig_atomic_t sigjmp_buf signedbitmapword sigset_t size_t slist_head slist_iter slist_mutable_iter slist_node slock_t socket_set spgBulkDeleteState spgChooseIn spgChooseOut spgChooseResultType spgConfigIn spgConfigOut spgInnerConsistentIn spgInnerConsistentOut spgLeafConsistentIn spgLeafConsistentOut spgNodePtr spgPickSplitIn spgPickSplitOut spgVacPendingItem spgxlogAddLeaf spgxlogAddNode spgxlogMoveLeafs spgxlogPickSplit spgxlogSplitTuple spgxlogState spgxlogVacuumLeaf spgxlogVacuumRedirect spgxlogVacuumRoot split_pathtarget_context split_pathtarget_item sql_error_callback_arg sqlparseInfo sqlparseState ss_lru_item_t ss_scan_location_t ss_scan_locations_t ssize_t standard_qp_extra stemmer_module stmtCacheEntry storeInfo storeRes_func stream_stop_callback string substitute_actual_parameters_context substitute_actual_srf_parameters_context substitute_phv_relids_context svtype symbol tablespaceinfo teReqs teSection temp_tablespaces_extra test_function test_shm_mq_header test_spec text timeKEY time_t timeout_handler_proc timeout_params timerCA tlist_vinfo toast_compress_header transferMode transfer_thread_arg trgm trgm_mb_char trivalue tsKEY ts_db_fctx ts_parserstate ts_tokenizer ts_tokentype tsearch_readline_state tuplehash_hash tuplehash_iterator txid type tzEntry u1byte u4byte u_char u_int uchr uid_t uint128 uint16 uint16_t uint32 uint32_t uint64 uint64_t uint8 uint8_t uintptr_t unicodeStyleBorderFormat unicodeStyleColumnFormat unicodeStyleFormat unicodeStyleRowFormat unicode_linestyle unit_conversion unlogged_relation_entry utf_local_conversion_func uuidKEY uuid_rc_t uuid_sortsupport_state uuid_t va_list vacuumingOptions validate_string_relopt varatt_expanded varattrib_1b varattrib_1b_e varattrib_4b vbits walrcv_check_conninfo_fn walrcv_connect_fn walrcv_create_slot_fn walrcv_disconnect_fn walrcv_endstreaming_fn walrcv_exec_fn walrcv_get_conninfo_fn walrcv_get_senderinfo_fn walrcv_identify_system_fn walrcv_readtimelinehistoryfile_fn walrcv_receive_fn walrcv_send_fn walrcv_server_version_fn walrcv_startstreaming_fn wchar2mb_with_len_converter wchar_t win32_deadchild_waitinfo win32_pthread wint_t worker_state worktable wrap xl_brin_createidx xl_brin_desummarize xl_brin_insert xl_brin_revmap_extend xl_brin_samepage_update xl_brin_update xl_btree_delete xl_btree_insert xl_btree_mark_page_halfdead xl_btree_metadata xl_btree_newroot xl_btree_reuse_page xl_btree_split xl_btree_unlink_page xl_btree_vacuum xl_clog_truncate xl_commit_ts_set xl_commit_ts_truncate xl_dbase_create_rec xl_dbase_drop_rec xl_end_of_recovery xl_hash_add_ovfl_page xl_hash_delete xl_hash_init_bitmap_page xl_hash_init_meta_page xl_hash_insert xl_hash_move_page_contents xl_hash_split_allocate_page xl_hash_split_complete xl_hash_squeeze_page xl_hash_update_meta_page xl_hash_vacuum_one_page xl_heap_clean xl_heap_cleanup_info xl_heap_confirm xl_heap_delete xl_heap_freeze_page 
xl_heap_freeze_tuple xl_heap_header xl_heap_inplace xl_heap_insert xl_heap_lock xl_heap_lock_updated xl_heap_multi_insert xl_heap_new_cid xl_heap_rewrite_mapping xl_heap_truncate xl_heap_update xl_heap_visible xl_invalid_page xl_invalid_page_key xl_invalidations xl_logical_message xl_multi_insert_tuple xl_multixact_create xl_multixact_truncate xl_parameter_change xl_relmap_update xl_replorigin_drop xl_replorigin_set xl_restore_point xl_running_xacts xl_seq_rec xl_smgr_create xl_smgr_truncate xl_standby_lock xl_standby_locks xl_tblspc_create_rec xl_tblspc_drop_rec xl_xact_abort xl_xact_assignment xl_xact_commit xl_xact_dbinfo xl_xact_invals xl_xact_origin xl_xact_parsed_abort xl_xact_parsed_commit xl_xact_parsed_prepare xl_xact_relfilenodes xl_xact_subxacts xl_xact_twophase xl_xact_xinfo xmlBuffer xmlBufferPtr xmlChar xmlDocPtr xmlErrorPtr xmlExternalEntityLoader xmlGenericErrorFunc xmlNodePtr xmlNodeSetPtr xmlParserCtxtPtr xmlParserInputPtr xmlStructuredErrorFunc xmlTextWriter xmlTextWriterPtr xmlXPathCompExprPtr xmlXPathContextPtr xmlXPathObjectPtr xmltype xpath_workspace xsltSecurityPrefsPtr xsltStylesheetPtr xsltTransformContextPtr yy_parser yy_size_t yyscan_t z_stream z_streamp zic_t
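
The regression tests above exercise HypoPG's public SQL API piece by piece. As a quick orientation, the following condensed session sketches how those pieces fit together in interactive use; it is not a file from the archive, it assumes CREATE EXTENSION hypopg has already been run, and the table name simply mirrors the one used in hypopg.sql.

-- a small table to experiment on
CREATE TABLE hypo (id integer, val text);
-- create a hypothetical (not materialized) index and note its identifier
SELECT * FROM hypopg_create_index('CREATE INDEX ON hypo (id)');
-- EXPLAIN (without ANALYZE) shows whether the planner would pick it
EXPLAIN SELECT * FROM hypo WHERE id = 1;
-- inspect, size and deparse the hypothetical indexes currently defined
SELECT * FROM hypopg_list_indexes;
SELECT hypopg_relation_size(indexrelid) FROM hypopg();
SELECT hypopg_get_indexdef(indexrelid) FROM hypopg();
-- drop one hypothetical index, or discard all of them
SELECT hypopg_drop_index(indexrelid) FROM hypopg() ORDER BY indexrelid LIMIT 1;
SELECT hypopg_reset();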