
Performance tuning PostgreSQL (Part-II) - Basic Server Configuration

In Part I, Performance tuning PostgreSQL - Basic query tuning concepts, I wrote about some basic ideas for improving query performance. In this post, I cover some basic postgresql.conf parameters that we should know about while configuring the server.

autovacuum Whenever rows in a PostgreSQL table are updated or deleted, dead rows are left behind. VACUUM gets rid of them so that the space can be reused. If a table doesn’t get vacuumed, it will get bloated, which wastes disk space and slows down sequential table scans (and – to a smaller extent – index scans).
VACUUM also takes care of freezing table rows so as to avoid problems when the transaction ID counter wraps around. The query below lists the ten tables most in need of vacuuming:
SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_all_tables
ORDER BY n_dead_tup
    / (n_live_tup
       * current_setting('autovacuum_vacuum_scale_factor')::float8
          + current_setting('autovacuum_vacuum_threshold')::float8)
     DESC
LIMIT 10;
If your bloated table does not show up here, and n_dead_tup is zero while last_autovacuum is NULL, you might have a problem with the statistics collector.
If the bloated table is right there on top, but last_autovacuum is NULL, you might need to configure autovacuum to be more aggressive so that it gets done with the table. 
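As a hedged illustration of what "more aggressive" can mean, autovacuum can be tuned per table with storage parameters; the table name my_bloated_table is just a placeholder, and the numbers are starting points rather than recommendations. This lowers the trigger threshold from the default 20% of dead rows to roughly 2%:
ALTER TABLE my_bloated_table
    SET (autovacuum_vacuum_scale_factor = 0.02, autovacuum_vacuum_threshold = 1000);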
shared_buffers The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL for caching data. A good rule of thumb on Linux is to set shared_buffers to 1/4 of RAM. This is not a hard-and-fast rule, but rather a good starting point for tuning a server. If your database is only 2GB and you have a 16GB server, 4GB of shared buffers is actually overkill.
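As an illustrative sketch only (the sizes are assumptions, not a recommendation), a dedicated 16GB server whose database is much larger than RAM might start from the following entry in postgresql.conf; changing shared_buffers requires a server restart.
        shared_buffers = 4GB            # roughly 1/4 of RAM on a dedicated 16GB server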
Memory architecture in PostgreSQL can be classified into two broad categories:
Local memory area – allocated by each backend process for its own use.
Shared memory area – used by all processes of a PostgreSQL server.
Local Memory Area : Each backend process allocates a local memory area for query processing; each area is divided into several sub-areas – whose sizes are either fixed or variable.
work_mem    The executor uses this area for sorting tuples in ORDER BY and DISTINCT operations, and for joining tables with merge-join and hash-join operations. If you do a lot of complex sorts and have a lot of memory, then increasing the work_mem parameter allows PostgreSQL to do larger in-memory sorts which, unsurprisingly, will be faster than disk-based equivalents. This size is applied to each and every sort done by each user, and complex queries can use multiple working memory sort buffers. Set it to 50MB with 30 users submitting queries, and you are soon using 1.5GB of real memory. Furthermore, if a query involves merge sorts of 8 tables, that requires 8 times work_mem. You need to consider what you set max_connections to in order to size this parameter correctly. This is a setting where data warehouse systems, where users submit very large queries, can readily make use of many gigabytes of memory.
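A small hedged example: instead of raising work_mem globally, it can be raised just for one session or report query, which is safer on systems with many connections. The orders table below is a placeholder.
SET work_mem = '64MB';
EXPLAIN ANALYZE
SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id ORDER BY 2 DESC;
RESET work_mem;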
maintenance_work_mem 
Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. It defaults to 64 megabytes (64MB). Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps. 
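For example (a sketch with placeholder object names), you can raise it only for the session that builds a large index and then reset it:
SET maintenance_work_mem = '1GB';
CREATE INDEX orders_customer_idx ON orders (customer_id);
RESET maintenance_work_mem;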
temp_buffers The executor uses this area for storing temporary tables.
Shared Memory Area : A shared memory area is allocated by a PostgreSQL server when it starts up. This area is also divided into several fixed-size sub-areas.
shared buffer pool PostgreSQL loads pages of tables and indexes from persistent storage into this area and operates on them directly.
WAL buffer To ensure that no data is lost after a server failure, PostgreSQL supports the WAL mechanism. WAL data (also referred to as XLOG records) is the transaction log of PostgreSQL, and the WAL buffer is a buffering area for WAL data before it is written to persistent storage.
commit log The commit log (CLOG) keeps the states of all transactions (e.g., in_progress, committed, aborted) for the concurrency control (CC) mechanism.
max_connections max_connections sets exactly that: the maximum number of client connections allowed. This is very important to some of the parameters below (particularly work_mem) because some memory resources are or can be allocated on a per-client basis, so the maximum number of clients suggests the maximum possible memory use. Generally, PostgreSQL on good hardware can support a few hundred connections. If you want to have thousands instead, you should consider using connection pooling software to reduce the connection overhead.
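To compare the configured limit with actual usage, you can query the standard pg_stat_activity view:
SELECT count(*) AS open_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;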
effective_cache_size effective_cache_size should be set to an estimate of how much memory is available for disk caching by the operating system and within the database itself, after taking into account what's used by the OS itself and other applications. This is a guideline for how much memory you expect to be available in the OS and PostgreSQL buffer caches, not an allocation! This value is used only by the PostgreSQL query planner to figure out whether plans it's considering would be expected to fit in RAM or not. If it's set too low, indexes may not be used for executing queries the way you'd expect. The setting for shared_buffers is not taken into account here--only the effective_cache_size value is, so it should include memory dedicated to the database too.
Setting effective_cache_size to 1/2 of total memory would be a normal conservative setting, and 3/4 of memory is a more aggressive but still reasonable amount. You might find a better estimate by looking at your operating system's statistics. On UNIX-like systems, add the free + cached numbers from free or top to get an estimate. On Windows see the "System Cache" size in the Windows Task Manager's Performance tab. Changing this setting does not require restarting the database (HUP is enough). 
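Continuing the assumed 16GB server from above (a sketch, not a prescription), if roughly 12GB remains available to the OS cache and shared_buffers combined:
        effective_cache_size = 12GB     # about 3/4 of RAM; a reload (HUP) is enough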
checkpoint_segments checkpoint_completion_target 
You need to tune checkpoint_timeout and max_wal_size (the successor to checkpoint_segments). First, you need to understand what they do:
        checkpoint_timeout = 5min
        max_wal_size = 1GB (before PostgreSQL 9.5 this was checkpoint_segments)
With these (default) values, PostgreSQL will trigger a CHECKPOINT every 5 minutes, or after the WAL grows to about 1GB on disk.
Checkpoint calculation
SELECT pg_current_xlog_insert_location();                     -- "1EB/32F59FA8"
-- wait five minutes
SELECT pg_current_xlog_insert_location();                     -- "1EB/3316D8B0"
SELECT pg_xlog_location_diff('1EB/3316D8B0', '1EB/32F59FA8'); -- 1845614240 bytes
SELECT ((1845614240 / 1024) / 1024) AS mb;                    -- about 1760 MB
This shows that over those 5 minutes the database generated around 1.8GB of WAL, so with checkpoint_timeout = 20min that would be about 8GB of WAL. However, max_wal_size is a quota for 2-3 checkpoints combined, so max_wal_size = 24GB (3 x 8GB) seems about right.
PostgreSQL writes new transactions to the database in files called WAL segments, which are 16MB in size. On versions before 9.5, every time checkpoint_segments worth of these files has been written (3 by default), a checkpoint occurs. Checkpoints can be resource intensive, and on a modern system doing one every 48MB will be a serious performance bottleneck. Setting checkpoint_segments to a much larger value improves that. Unless you're running on a very small configuration, you'll almost certainly be better off setting this to at least 10, which also allows usefully increasing the completion target.
For more write-heavy systems, values from 32 (checkpoint every 512MB) to 256 (every 4GB) are popular nowadays. Very large settings use a lot more disk and will cause your database to take longer to recover, so make sure you're comfortable with both those things before large increases. Normally the large settings (>64/1GB) are only used for bulk loading. Note that whatever you choose for the segments, you'll still get a checkpoint at least every 5 minutes unless you also increase checkpoint_timeout (which isn't necessary on most systems). 
Checkpoint writes are spread out a bit while the system starts working toward the next checkpoint. You can spread those writes out further, lowering the average write overhead, by increasing the checkpoint_completion_target parameter to its useful maximum of 0.9 (aim to finish by the time 90% of the next checkpoint interval has passed) rather than the default of 0.5 (aim to finish when the next one is 50% done). A setting of 0 gives something similar to the behavior of obsolete versions. The main reason the default isn't just 0.9 is that you need a larger checkpoint_segments value than the default for broader spreading to work well.
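Putting the calculation above together, a hedged postgresql.conf sketch for that measured write rate (treat the numbers as an example only) might look like this:
        checkpoint_timeout = 20min
        max_wal_size = 24GB
        checkpoint_completion_target = 0.9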
default_statistics_target The database collects statistics about each of the tables in your database to decide how to execute queries against them. If you're not getting good query execution plans, particularly on larger (or more varied) tables, you should increase default_statistics_target and then ANALYZE the database again (or wait for autovacuum to do it for you).
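The target can also be raised for a single problematic column rather than globally; the table and column names below are placeholders:
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;
ANALYZE orders;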
log_temp_files Can be used to log sorts, hashes, and temp files which can be useful in figuring out if sorts are spilling to disk instead of fitting in memory. You can see sorts spilling to disk using EXPLAIN ANALYZE plans as well. For example, if you see a line like Sort Method: external merge Disk: 7526kB in the output of EXPLAIN ANALYZE, a work_mem of at least 8MB would keep the intermediate data in memory and likely improve the query response time. 
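A minimal sketch of the corresponding postgresql.conf entry; a value of 0 logs every temporary file together with its size, while a size threshold logs only the larger ones:
        log_temp_files = 0              # log all temporary files
        #log_temp_files = 10MB          # or only files larger than 10MB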
fsync (boolean) If this parameter is on, the PostgreSQL server will try to make sure that updates are physically written to disk, by issuing fsync() system calls or various equivalent methods (see wal_sync_method). This ensures that the database cluster can recover to a consistent state after an operating system or hardware crash.
wal_sync_method wal_buffers After every transaction commit, PostgreSQL forces the commit record out to its write-ahead log on disk. This can be done in a couple of ways, and on some platforms the other options are considerably faster than the conservative default. open_sync is the most common non-default setting switched to, on platforms that support it but default to one of the fsync methods. Increasing wal_buffers from its tiny default of a small number of kilobytes is helpful for write-heavy systems. Benchmarking generally suggests that just increasing to 1MB is enough for some large systems, and given the amount of RAM in modern servers, allocating a full WAL segment is reasonable. Changing wal_buffers requires a database restart.
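A cautious sketch only: the right wal_sync_method is platform-specific, so benchmark with the pg_test_fsync utility shipped with PostgreSQL before changing it, and treat the values below as assumptions.
        wal_buffers = 16MB              # one full WAL segment; requires a restart
        #wal_sync_method = open_sync    # only if supported and measurably faster on your platform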
max_prepared_transactions This setting is used for managing two-phase commit. If you do not use two-phase commit (and if you don't know what it is, you don't use it), then you can set this value to 0. That will save a little bit of shared memory. For database systems with a large number (at least hundreds) of concurrent connections, be aware that this setting also affects the number of available lock slots in pg_locks, so you may want to leave it at the default setting. There is a formula for how much memory gets allocated in the docs and in the default postgresql.conf. Changing max_prepared_transactions requires a server restart.
random_page_cost This setting suggests to the optimizer how long it will take your disks to seek to a random disk page, as a multiple of how long a sequential read (with a cost of 1.0) takes. If you have particularly fast disks, as commonly found with RAID arrays of SCSI disks, it may be appropriate to lower random_page_cost, which will encourage the query optimizer to use random access index scans. Note that random_page_cost is pretty far down this list (at the end in fact). If you are getting bad plans, this shouldn't be the first thing you look at, even though lowering this value may be effective. Instead, you should start by making sure autovacuum is working properly, that you are collecting enough statistics, and that you have correctly sized the memory parameters for your server--all the things gone over above. After you've done all those much more important things, if you're still getting bad plans, then you should see if lowering random_page_cost is still useful.
synchronous_commit If synchronous_standby_names is set, this parameter also controls whether or not transaction commits will wait for the transaction's WAL records to be replicated to the standby server. When set to on, commits will wait until a reply from the current synchronous standby indicates it has received the commit record of the transaction and flushed it to disk. This ensures the transaction will not be lost unless both primary and standby suffer corruption of their database storage.
full_page_writes (boolean) When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint. This is needed because a page write that is in process during an operating system crash might be only partially completed, leading to an on-disk page that contains a mix of old and new data. The row-level change data normally stored in WAL will not be enough to completely restore such a page during post-crash recovery. Storing the full page image guarantees that the page can be correctly restored, but at the price of increasing the amount of data that must be written to WAL. One way to reduce the cost of full-page writes is to increase the checkpoint interval parameters.
wal_compression (boolean) When this parameter is on, the PostgreSQL server compresses a full page image written to WAL when full_page_writes is on or during a base backup. A compressed page image will be decompressed during WAL replay. The default value is off. Only superusers can change this setting. Turning this parameter on can reduce the WAL volume without increasing the risk of unrecoverable data corruption, but at the cost of some extra CPU spent on compression during WAL logging and on decompression during WAL replay.
wal_buffers (integer) The amount of shared memory used for WAL data that has not yet been written to disk. The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers, but not less than 64kB nor more than the size of one WAL segment, typically 16MB. Setting this value to at least a few megabytes can improve write performance on a busy server where many clients are committing at once. The auto-tuning selected by the default setting of -1 should give reasonable results in most cases.
wal_writer_delay (integer) Specifies the delay between activity rounds for the WAL writer. In each round the writer will flush WAL to disk. It then sleeps for wal_writer_delay milliseconds, and repeats. The default value is 200 milliseconds (200ms).
commit_delay (integer) commit_delay adds a time delay, measured in microseconds, before a WAL flush is initiated. This can improve group commit throughput by allowing a larger number of transactions to commit via a single WAL flush, if system load is high enough that additional transactions become ready to commit within the given interval. 
commit_siblings (integer)    Minimum number of concurrent open transactions to require before performing the commit_delay delay. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval. The default is five transactions. 
checkpoint_timeout (integer) Maximum time between automatic WAL checkpoints, in seconds. The valid range is between 30 seconds and one hour. The default is five minutes (5min). Increasing this parameter can increase the amount of time needed for crash recovery. 
checkpoint_warning (integer) Write a message to the server log if checkpoints caused by the filling of WAL segment files happen closer together than this many seconds (which suggests that max_wal_size ought to be raised). The default is 30 seconds (30s). Zero disables the warning. No warnings will be generated if checkpoint_timeout is less than checkpoint_warning.
max_wal_size (integer) Maximum size to let the WAL grow to between automatic WAL checkpoints. This is a soft limit; WAL size can exceed max_wal_size under special circumstances, like under heavy load, a failing archive_command, or a high wal_keep_segments setting. The default is 1 GB. Increasing this parameter can increase the amount of time needed for crash recovery. 
min_wal_size (integer) As long as WAL disk usage stays below this setting, old WAL files are always recycled for future use at a checkpoint, rather than removed. This can be used to ensure that enough WAL space is reserved to handle spikes in WAL usage, for example when running large batch jobs. The default is 80 MB. 
max_stack_depth (integer) Specifies the maximum safe depth of the server's execution stack. The ideal setting for this parameter is the actual stack size limit enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so. The safety margin is needed because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines such as expression evaluation. The default setting is two megabytes (2MB), which is conservatively small and unlikely to risk crashes. However, it might be too small to allow execution of complex functions. Only superusers can change this setting. Setting max_stack_depth higher than the actual kernel limit will mean that a runaway recursive function can crash an individual backend process. On platforms where PostgreSQL can determine the kernel limit, the server will not allow this variable to be set to an unsafe value. However, not all platforms provide the information, so caution is recommended in selecting a value.

The following points should be kept in mind when doing bulk DDL or DML on the server:
Disable Autocommit When using multiple INSERTs, turn off autocommit and just do one commit at the end. (In plain SQL, this means issuing BEGIN at the start and COMMIT at the end. Some client libraries might do this behind your back, in which case you need to make sure the library does it when you want it done.) If you allow each insertion to be committed separately, PostgreSQL is doing a lot of work for each row that is added. An additional benefit of doing all insertions in one transaction is that if the insertion of one row were to fail, then all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data.
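A minimal sketch with a placeholder items table: wrapping the INSERTs in one explicit transaction makes PostgreSQL flush the WAL once at COMMIT instead of once per row.
BEGIN;
INSERT INTO items (id, name) VALUES (1, 'first');
INSERT INTO items (id, name) VALUES (2, 'second');
INSERT INTO items (id, name) VALUES (3, 'third');
COMMIT;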
Use COPY to load all the rows in one command, instead of using a series of INSERT commands. The COPY command is optimized for loading large numbers of rows; it is less flexible than INSERT, but incurs significantly less overhead for large data loads. Since COPY is a single command, there is no need to disable autocommit if you use this method to populate a table. 
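For example (the file path and table are assumptions), the same rows can be loaded with a single server-side COPY; from psql, the client-side \copy variant works the same way without requiring server file access.
COPY items (id, name) FROM '/tmp/items.csv' WITH (FORMAT csv);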
Remove Indexes If you are loading a freshly created table, the fastest method is to create the table, bulk load the table's data using COPY, then create any indexes needed for the table. Creating an index on pre-existing data is quicker than updating it incrementally as each row is loaded. If you are adding large amounts of data to an existing table, it might be a win to drop the indexes, load the table, and then recreate the indexes.
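A hedged sketch of the fresh-table pattern described above, again with placeholder names: create, load, and only then index and analyze.
CREATE TABLE items (id integer, name text);
COPY items (id, name) FROM '/tmp/items.csv' WITH (FORMAT csv);
CREATE INDEX items_name_idx ON items (name);
ANALYZE items;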
Remove Foreign Key Constraints Just as with indexes, a foreign key constraint can be checked “in bulk” more efficiently than row-by-row. So it might be useful to drop foreign key constraints, load data, and re-create the constraints. Again, there is a trade-off between data load speed and loss of error checking while the constraint is missing.
Increase maintenance_work_mem Temporarily increasing the maintenance_work_mem configuration variable when loading large amounts of data can lead to improved performance. This will help to speed up CREATE INDEX commands and ALTER TABLE ADD FOREIGN KEY commands. 
Increase max_wal_size Temporarily increasing the max_wal_size configuration variable can also make large data loads faster. This is because loading a large amount of data into PostgreSQL will cause checkpoints to occur more often than the normal checkpoint frequency (specified by the checkpoint_timeout configuration variable). Whenever a checkpoint occurs, all dirty pages must be flushed to disk. By increasing max_wal_size temporarily during bulk data loads, the number of checkpoints that are required can be reduced. 
Run ANALYZE Afterwards Whenever you have significantly altered the distribution of data within a table, running ANALYZE is strongly recommended. This includes bulk loading large amounts of data into the table. Running ANALYZE (or VACUUM ANALYZE) ensures that the planner has up-to-date statistics about the table. With no statistics or obsolete statistics, the planner might make poor decisions during query planning, leading to poor performance on any tables with inaccurate or nonexistent statistics.



