This is my fifth blog post on PostgreSQL topics. It is a collection of important extensions commonly used in PostgreSQL.
Extensions were introduced in PostgreSQL 9.1 to allow for easier packaging of additions to PostgreSQL. An extension can package user-visible functions or use hooks in PostgreSQL to modify how the database performs certain operations.
Create extension: You can create an extension using the syntax below:
CREATE EXTENSION [ IF NOT EXISTS ] extension_name
[ WITH ] [ SCHEMA schema_name ]
[ VERSION version ]
[ FROM old_version ]
e.g. CREATE EXTENSION adminpack;
Remove extension
You can remove an extension with the DROP command:
DROP EXTENSION [ IF EXISTS ] extension_name [, ...] [ CASCADE | RESTRICT ]
DROP EXTENSION removes extensions from the database. Dropping an extension causes its component objects to be dropped as well.
You must own the extension to use DROP EXTENSION.
e.g. DROP EXTENSION adminpack;
Alter extension: Change the definition of an extension
ALTER EXTENSION extension_name UPDATE [ TO new_version ]
ALTER EXTENSION extension_name SET SCHEMA new_schema
ALTER EXTENSION extension_name ADD member_object
ALTER EXTENSION extension_name DROP member_object
e.g.
To update the hstore extension to version 2.0:
ALTER EXTENSION hstore UPDATE TO '2.0';
To change the schema of the hstore extension to utils:
ALTER EXTENSION hstore SET SCHEMA utils;
To add an existing function to the hstore extension:
ALTER EXTENSION hstore ADD FUNCTION populate_record(anyelement, hstore);
Some useful extensions commonly used in PostgreSQL:
adminpack provides a number of support functions which pgAdmin and other administration and management tools can use to provide additional functionality, such as remote management of server log files. Use of all these functions is restricted to superusers.
auto_explain The auto_explain module provides a means for logging execution plans of slow statements automatically, without having to run EXPLAIN by hand. This is especially helpful for tracking down un-optimized queries in large applications.
To use it, simply load it into the server. You can load it into an individual session:
LOAD 'auto_explain';
(You must be superuser to do that.) More typical usage is to preload it into some or all sessions by including auto_explain in session_preload_libraries or shared_preload_libraries in postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen. Of course there is a price in overhead for that.
There are several configuration parameters that control the behavior of auto_explain. Note that the default behavior is to do nothing, so you must set at least auto_explain.log_min_duration if you want any results.
auto_explain.log_min_duration (integer)
auto_explain.log_min_duration is the minimum statement execution time, in milliseconds, that will cause the statement's plan to be logged. Setting this to zero logs all plans. Minus-one (the default) disables logging of plans. For example, if you set it to 250, then all statements that run for 250 ms or longer will be logged. Only superusers can change this setting.
auto_explain.log_analyze (boolean)
auto_explain.log_buffers (boolean)
auto_explain.log_timing (boolean)
auto_explain.log_triggers (boolean)
auto_explain.log_verbose (boolean)
auto_explain.log_format (enum)
auto_explain.log_format selects the EXPLAIN output format to be used. The allowed values are text, xml, json, and yaml. The default is text. Only superusers can change this setting.
auto_explain.log_nested_statements (boolean)
auto_explain.log_nested_statements causes nested statements (statements executed inside a function) to be considered for logging. When it is off, only top-level query plans are logged. This parameter is off by default. Only superusers can change this setting.
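Putting these parameters together, a typical session-level setup might look like the sketch below (the threshold and query are illustrative):

```sql
-- load the module into the current session (requires superuser)
LOAD 'auto_explain';
-- log plans of statements that run 250 ms or longer
SET auto_explain.log_min_duration = 250;
-- include actual run times and row counts, like EXPLAIN ANALYZE
SET auto_explain.log_analyze = true;
-- emit plans in JSON for easier machine parsing
SET auto_explain.log_format = 'json';
-- any sufficiently slow query now gets its plan written to the server log
SELECT count(*) FROM pg_class;
```

For server-wide use, the same `auto_explain.*` settings go in postgresql.conf alongside shared_preload_libraries.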
btree_gin btree_gin provides sample GIN operator classes that implement B-tree equivalent behavior for the data types int2, int4, int8, float4, float8, timestamp with time zone, timestamp without time zone, time with time zone, time without time zone, date, interval, oid, money, "char", varchar, text, bytea, bit, varbit, macaddr, macaddr8, inet, cidr, uuid, name, bool, bpchar, and all enum types.
Example usage:
CREATE TABLE test (a int4);
-- create index
CREATE INDEX testidx ON test USING GIN (a);
-- query
SELECT * FROM test WHERE a < 10;
btree_gist btree_gist provides GiST index operator classes that implement B-tree equivalent behavior for the data types int2, int4, int8, float4, float8, numeric, timestamp with time zone, timestamp without time zone, time with time zone, time without time zone, date, interval, oid, money, char, varchar, text, bytea, bit, varbit, macaddr, inet, and cidr.
Simple example using btree_gist instead of btree:
CREATE TABLE test (a int4);
-- create index
CREATE INDEX testidx ON test USING GIST (a);
-- query
SELECT * FROM test WHERE a < 10;
-- nearest-neighbor search: find the ten entries closest to "42"
SELECT *, a <-> 42 AS dist FROM test ORDER BY a <-> 42 LIMIT 10;
Use an exclusion constraint to enforce the rule that a cage at a zoo can contain only one kind of animal:
CREATE TABLE zoo (
cage INTEGER,
animal TEXT,
EXCLUDE USING GIST (cage WITH =, animal WITH <>)
);
INSERT INTO zoo VALUES(123, 'zebra');
INSERT 0 1
INSERT INTO zoo VALUES(123, 'zebra');
INSERT 0 1
INSERT INTO zoo VALUES(123, 'lion');
ERROR: conflicting key value violates exclusion constraint "zoo_cage_animal_excl"
DETAIL: Key (cage, animal)=(123, lion) conflicts with existing key (cage, animal)=(123, zebra).
INSERT INTO zoo VALUES(124, 'lion');
INSERT 0 1
dblink is a PostgreSQL contrib extension that allows users to connect to other databases and to run arbitrary queries in them.
dblink(text connname, text sql [, bool fail_on_error]) returns setof record
dblink(text connstr, text sql [, bool fail_on_error]) returns setof record
dblink(text sql [, bool fail_on_error]) returns setof record
e.g.
CREATE EXTENSION dblink;
-- as superuser:
SELECT dblink_connect('host=192.x.x.x dbname=postgres user=postgres port=5432');
-- as non-superuser (insecure; no password check is enforced):
SELECT dblink_connect_u('host=192.x.x.x dbname=postgres user=postgres port=5434');
SELECT dblink_exec('insert into test values(1,1,''test'');');
Other options of dblink
dblink_connect -- opens a persistent connection to a remote database
dblink_connect_u -- opens a persistent connection to a remote database, insecurely
dblink_disconnect -- closes a persistent connection to a remote database
dblink -- executes a query in a remote database
dblink_exec -- executes a command in a remote database
dblink_open -- opens a cursor in a remote database
dblink_fetch -- returns rows from an open cursor in a remote database
dblink_close -- closes a cursor in a remote database
dblink_get_connections -- returns the names of all open named dblink connections
dblink_error_message -- gets last error message on the named connection
dblink_send_query -- sends an async query to a remote database
dblink_is_busy -- checks if connection is busy with an async query
dblink_get_notify -- retrieve async notifications on a connection
dblink_get_result -- gets an async query result
dblink_cancel_query -- cancels any active query on the named connection
dblink_get_pkey -- returns the positions and field names of a relation's primary key fields
dblink_build_sql_insert -- builds an INSERT statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_delete -- builds a DELETE statement using supplied values for primary key field values
dblink_build_sql_update -- builds an UPDATE statement using a local tuple, replacing the primary key field values with alternative supplied values
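A minimal round trip with a named connection might look like the sketch below (the host, table, and column names are illustrative assumptions):

```sql
CREATE EXTENSION dblink;
-- open a named persistent connection
SELECT dblink_connect('myconn', 'host=192.x.x.x dbname=postgres user=postgres port=5432');
-- run a query remotely; the result columns must be declared on the local side
SELECT * FROM dblink('myconn', 'SELECT id, name FROM test')
    AS t(id int, name text);
-- run a command remotely
SELECT dblink_exec('myconn', 'INSERT INTO test VALUES (1, ''one'')');
-- close the connection
SELECT dblink_disconnect('myconn');
```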
file_fdw The file_fdw module provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system, or to execute programs on the server and read their output. The data file or program output must be in a format that can be read by COPY FROM; see COPY for details. Access to data files is currently read-only.
e.g.
Create a Foreign Table for PostgreSQL CSV Logs
CREATE EXTENSION file_fdw;
Then create a foreign server:
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;
Now you are ready to create the foreign data table. Using the CREATE FOREIGN TABLE command, you will need to define the columns for the table, the CSV file name, and its format:
CREATE FOREIGN TABLE pglog (
log_time timestamp(3) with time zone,
user_name text,
database_name text,
process_id integer,
connection_from text,
session_id text,
session_line_num bigint,
command_tag text,
session_start_time timestamp with time zone,
virtual_transaction_id text,
transaction_id bigint,
error_severity text,
sql_state_code text,
message text,
detail text,
hint text,
internal_query text,
internal_query_pos integer,
context text,
query text,
query_pos integer,
location text,
application_name text
) SERVER pglog OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );
That's it; now you can query your log directly. In production, of course, you would need to define some way to deal with log rotation.
fuzzystrmatch The fuzzystrmatch module provides several functions to determine similarities and distance between strings. At present, the soundex, metaphone, dmetaphone, and dmetaphone_alt functions do not work well with multibyte encodings (such as UTF-8).
The Soundex system is a method of matching similar-sounding names by converting them to the same code.
e.g.
SELECT soundex('hello world!');
SELECT soundex('Anne'), soundex('Ann'), difference('Anne', 'Ann');
SELECT soundex('Anne'), soundex('Andrew'), difference('Anne', 'Andrew');
SELECT soundex('Anne'), soundex('Margaret'), difference('Anne', 'Margaret');
pg_buffercache The pg_buffercache module provides a means for examining what's happening in the shared buffer cache in real time.
The module provides a C function pg_buffercache_pages that returns a set of records, plus a view
pg_buffercache that wraps the function for convenient use.
By default public access is revoked from both of these, just in case there are security issues lurking.
pg_buffercache Columns:
Name | Type | References | Description
bufferid | integer | | ID, in the range 1..shared_buffers
relfilenode | oid | pg_class.relfilenode | Filenode number of the relation
reltablespace | oid | pg_tablespace.oid | Tablespace OID of the relation
reldatabase | oid | pg_database.oid | Database OID of the relation
relforknumber | smallint | | Fork number within the relation; see include/common/relpath.h
relblocknumber | bigint | | Page number within the relation
isdirty | boolean | | Is the page dirty?
usagecount | smallint | | Clock-sweep access count
pinning_backends | integer | | Number of backends pinning this buffer
e.g. SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b INNER JOIN pg_class c
ON b.relfilenode = pg_relation_filenode(c.oid) AND
b.reldatabase IN (0, (SELECT oid FROM pg_database
WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;
There is one row for each buffer in the shared cache. Unused buffers are shown with all fields null except bufferid. Shared system catalogs are shown as belonging to database zero.
pgcrypto The pgcrypto module provides cryptographic functions for PostgreSQL.
Through this extension we can encrypt or decrypt our data with different algorithms.
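For example, a short sketch (the password, data, and passphrase values are illustrative):

```sql
CREATE EXTENSION pgcrypto;
-- one-way password hashing with a random Blowfish salt
SELECT crypt('secret', gen_salt('bf'));
-- symmetric PGP encryption and decryption round trip (yields 'my data' back)
SELECT pgp_sym_decrypt(pgp_sym_encrypt('my data', 'passphrase'), 'passphrase');
```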
pg_freespacemap The pg_freespacemap module provides a means for examining the free space map (FSM). It provides a function called pg_freespace, or two overloaded functions, to be precise. The functions show the value recorded in the free space map for a given page, or for all pages in the relation.
e.g. CREATE EXTENSION pg_freespacemap;
SELECT * FROM pg_freespace('tablename');
tablefunc The tablefunc module provides functions that manipulate whole tables, including crosstab (pivot) functionality.
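For instance, crosstab can pivot (row, category, value) rows into columns; the sketch below assumes a hypothetical marks table with student, subject, and score columns:

```sql
CREATE EXTENSION tablefunc;
-- pivot rows into columns; the output column list must be declared explicitly
-- (the marks table and its columns are illustrative)
SELECT * FROM crosstab(
    'SELECT student, subject, score FROM marks ORDER BY 1, 2'
) AS ct(student text, maths int, science int);
```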
Foreign Data Wrappers (FDWs) The foreign data wrapper is responsible for fetching data from the remote data source and returning it to the PostgreSQL executor.
To prepare for remote access using fdw:
Install the extension using CREATE EXTENSION.
Create a foreign server object, using CREATE SERVER, to represent each remote database you want to connect to. Specify connection information, except user and password, as options of the server object.
Create a user mapping, using CREATE USER MAPPING, for each database user you want to allow to access each foreign server. Specify the remote user name and password to use as user and password options of the user mapping.
Create a foreign table, using CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA, for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's, if you specify the correct remote names as options of the foreign table object.
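Applied to postgres_fdw, the four steps above look roughly like this (the server name, connection details, and table definition are illustrative assumptions):

```sql
CREATE EXTENSION postgres_fdw;
-- step 2: a server object holding connection info (no credentials)
CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '192.x.x.x', dbname 'postgres', port '5432');
-- step 3: credentials for the current local user
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
    OPTIONS (user 'postgres', password 'secret');
-- step 4: a local definition matching the remote table
CREATE FOREIGN TABLE remote_test (id int, name text)
    SERVER remote_pg OPTIONS (schema_name 'public', table_name 'test');
SELECT * FROM remote_test;
```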
There are many Foreign Data Wrappers; some of them are listed below:
• postgres_fdw – connects to remote PostgreSQL databases.
• oracle_fdw – connects to Oracle databases.
• mysql_fdw – connects to MySQL databases.
• tds_fdw – connects to Sybase and Microsoft SQL Server databases.
• odbc_fdw – connects to any ODBC source.
• couchdb_fdw – connects to CouchDB databases.
• redis_fdw – connects to Redis databases.
• twitter_fdw – fetches messages from Twitter.
• ldap_fdw – queries LDAP servers.
• file_fdw – access data files in the server’s file system and query them as tables.
• PGStrom – uses GPU devices to accelerate sequential scan on massive amount of records with complex qualifiers.
• s3_fdw – reads files located in Amazon S3.
• www_fdw – accesses web services as a data source.
You can find the list of available extensions with the query below:
SELECT * FROM pg_available_extensions;
Sr No. | name | default_version | comment
1 | adminpack | 1 | administrative functions for PostgreSQL
2 | autoinc | 1 | functions for autoincrementing fields
3 | btree_gin | 1 | support for indexing common datatypes in GIN
4 | btree_gist | 1.1 | support for indexing common datatypes in GiST
5 | chkpass | 1 | data type for auto-encrypted passwords
6 | citext | 1.1 | data type for case-insensitive character strings
7 | cube | 1 | data type for multidimensional cubes
8 | dblink | 1.1 | connect to other PostgreSQL databases from within a database
9 | dict_int | 1 | text search dictionary template for integers
10 | dict_xsyn | 1 | text search dictionary template for extended synonym processing
11 | earthdistance | 1 | calculate great-circle distances on the surface of the Earth
12 | file_fdw | 1 | foreign-data wrapper for flat file access
13 | fuzzystrmatch | 1 | determine similarities and distance between strings
14 | hstore | 1.3 | data type for storing sets of (key, value) pairs
15 | hstore_plperl | 1 | transform between hstore and plperl
16 | hstore_plperlu | 1 | transform between hstore and plperlu
17 | hstore_plpython2u | 1 | transform between hstore and plpython2u
18 | hstore_plpython3u | 1 | transform between hstore and plpython3u
19 | hstore_plpythonu | 1 | transform between hstore and plpythonu
20 | insert_username | 1 | functions for tracking who changed a table
21 | intagg | 1 | integer aggregator and enumerator (obsolete)
22 | intarray | 1 | functions, operators, and index support for 1-D arrays of integers
23 | isn | 1 | data types for international product numbering standards
24 | lo | 1 | Large Object maintenance
25 | ltree | 1 | data type for hierarchical tree-like structures
26 | ltree_plpython2u | 1 | transform between ltree and plpython2u
27 | ltree_plpython3u | 1 | transform between ltree and plpython3u
28 | ltree_plpythonu | 1 | transform between ltree and plpythonu
29 | moddatetime | 1 | functions for tracking last modification time
30 | pageinspect | 1.3 | inspect the contents of database pages at a low level
31 | pgcrypto | 1.2 | cryptographic functions
32 | pgrowlocks | 1.1 | show row-level locking information
33 | pgstattuple | 1.3 | show tuple-level statistics
34 | pg_buffercache | 1.1 | examine the shared buffer cache
35 | pg_freespacemap | 1 | examine the free space map (FSM)
36 | pg_prewarm | 1 | prewarm relation data
37 | pg_stat_statements | 1.3 | track execution statistics of all SQL statements executed
38 | pg_trgm | 1.1 | text similarity measurement and index searching based on trigrams
39 | pldbgapi | 1.1 | server-side support for debugging PL/pgSQL functions
40 | plperl | 1 | PL/Perl procedural language
41 | plperlu | 1 | PL/PerlU untrusted procedural language
42 | plpgsql | 1 | PL/pgSQL procedural language
43 | plpython2u | 1 | PL/Python2U untrusted procedural language
44 | plpython3u | 1 | PL/Python3U untrusted procedural language
45 | plpythonu | 1 | PL/PythonU untrusted procedural language
46 | pltcl | 1 | PL/Tcl procedural language
47 | pltclu | 1 | PL/TclU untrusted procedural language
48 | postgres_fdw | 1 | foreign-data wrapper for remote PostgreSQL servers
49 | refint | 1 | functions for implementing referential integrity (obsolete)
50 | seg | 1 | data type for representing line segments or floating-point intervals
51 | sslinfo | 1 | information about SSL certificates
52 | tablefunc | 1 | functions that manipulate whole tables, including crosstab
53 | tcn | 1 | Triggered change notifications
54 | timetravel | 1 | functions for implementing time travel
55 | tsearch2 | 1 | compatibility package for pre-8.3 text search functions
56 | tsm_system_rows | 1 | TABLESAMPLE method which accepts number of rows as a limit
57 | tsm_system_time | 1 | TABLESAMPLE method which accepts time in milliseconds as a limit
58 | unaccent | 1 | text search dictionary that removes accents
59 | uuid-ossp | 1 | generate universally unique identifiers (UUIDs)
60 | xml2 | 1 | XPath querying and XSLT
Note: To prepare this post, I took references from the PostgreSQL documentation.