Param Templates
Use Pigsty’s built-in Patroni config templates or customize your own
Pigsty provides four preset Patroni/PostgreSQL config templates optimized for different workloads:
Use pg_conf to select a template; the default is oltp.yml.
The database tuning template pg_conf should be paired with the OS tuning template node_tune.
Usage
Set pg_conf in your cluster definition.
It’s recommended to set node_tune accordingly for OS-level tuning:
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
vars:
pg_cluster: pg-test
pg_conf: oltp.yml # PostgreSQL config template (default)
node_tune: oltp # OS tuning template (default)
For critical financial workloads, use crit.yml:
pg-finance:
hosts:
10.10.10.21: { pg_seq: 1, pg_role: primary }
10.10.10.22: { pg_seq: 2, pg_role: replica }
10.10.10.23: { pg_seq: 3, pg_role: replica }
vars:
pg_cluster: pg-finance
pg_conf: crit.yml # PostgreSQL critical template
node_tune: crit # OS critical tuning
For low-spec VMs or dev environments, use tiny.yml:
pg-dev:
hosts:
10.10.10.31: { pg_seq: 1, pg_role: primary }
vars:
pg_cluster: pg-dev
pg_conf: tiny.yml # PostgreSQL tiny template
node_tune: tiny # OS tiny tuning
Comparison
The four templates differ significantly in key parameters:
Connections & Memory
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| max_connections | 500/1000 | 500 | 500/1000 | 250 |
| work_mem range | 64MB-1GB | 64MB-8GB | 64MB-1GB | 16MB-256MB |
| maintenance_work_mem | 25% shmem | 50% shmem | 25% shmem | 25% shmem |
| max_locks_per_transaction | 1-2x maxconn | 2-4x maxconn | 1-2x maxconn | 1-2x maxconn |
Parallel Query
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| max_worker_processes | cpu+8 | cpu+12 | cpu+8 | cpu+4 |
| max_parallel_workers | 50% cpu | 80% cpu | 50% cpu | 50% cpu |
| max_parallel_workers_per_gather | 20% cpu (max 8) | 50% cpu | 0 (off) | 0 (off) |
| parallel_setup_cost | 2000 | 1000 | 2000 | 1000 |
| parallel_tuple_cost | 0.2 | 0.1 | 0.2 | 0.1 |
Sync Replication
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| synchronous_mode | depends on pg_rpo | depends on pg_rpo | forced on | depends on pg_rpo |
| data_checksums | optional | optional | forced on | optional |
Vacuum Config
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| vacuum_cost_delay | 20ms | 10ms | 20ms | 20ms |
| vacuum_cost_limit | 2000 | 10000 | 2000 | 2000 |
| autovacuum_max_workers | 3 | 3 | 3 | 2 |
Timeout & Security
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| idle_in_transaction_session_timeout | 10min | off | 1min | 10min |
| log_min_duration_statement | 100ms | 1000ms | 100ms | 100ms |
| default_statistics_target | 400 | 1000 | 400 | 200 |
| track_activity_query_size | 8KB | 8KB | 32KB | 8KB |
| log_connections | auth | auth | full | default |
IO Config (PG17+)
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| io_workers | 25% cpu (4-16) | 50% cpu (4-32) | 25% cpu (4-8) | 3 |
| temp_file_limit | 1/20 disk | 1/5 disk | 1/20 disk | 1/20 disk |
Selection Guide
OLTP Template: Default choice for most transaction processing. Ideal for e-commerce, social, gaming apps.
OLAP Template: For data warehouses, BI reports, ETL. Allows large queries, high parallelism, relaxed timeouts.
CRIT Template: For financial transactions, core accounting with strict consistency/security requirements. Forced sync replication, checksums, full audit.
TINY Template: For dev/test environments, resource-constrained VMs, Raspberry Pi. Minimizes resource usage, disables parallel queries.
Custom Templates
Create custom templates based on existing ones. Templates are in roles/pgsql/templates/:
roles/pgsql/templates/
├── oltp.yml # OLTP template (default)
├── olap.yml # OLAP template
├── crit.yml # CRIT critical template
└── tiny.yml # TINY micro template
Steps to create a custom template:
- Copy an existing template as base
- Modify parameters as needed
- Place it in roles/pgsql/templates/
- Reference it via pg_conf
Example:
cp roles/pgsql/templates/oltp.yml roles/pgsql/templates/myapp.yml
# Edit myapp.yml as needed
Then use in your cluster:
pg-myapp:
vars:
pg_conf: myapp.yml
Templates use Jinja2 syntax; parameters are dynamically computed based on node resources (CPU, memory, disk).
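As a simplified illustration of how a template's Jinja2 expressions turn node facts into parameter values (not a verbatim excerpt; see the Tuning Strategy section below for the actual expressions used by the built-in templates):
{% set pg_shared_buffers = (node_mem_mb|int * pg_shared_buffer_ratio|float)|round(0, 'ceil')|int %}
shared_buffers: {{ pg_shared_buffers }}MB   # e.g. 25% of detected node memory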
Tuning Strategy
For technical details on template parameter optimization, see Tuning Strategy:
- Memory tuning (shared buffers, work mem, max connections)
- CPU tuning (parallel query worker config)
- Storage tuning (WAL size, temp file limits)
- Manual parameter adjustment
1 - Parameter Optimization Policy
Learn the parameter optimization strategies Pigsty uses for the 4 different PostgreSQL workload scenarios.
Pigsty provides four scenario-based parameter templates by default, which can be specified and used through the pg_conf parameter.
- tiny.yml: Optimized for small nodes, VMs, and small demos (1-8 cores, 1-16GB)
- oltp.yml: Optimized for OLTP workloads and latency-sensitive applications (4C8G+) (default template)
- olap.yml: Optimized for OLAP workloads and throughput (4C8G+)
- crit.yml: Optimized for data consistency and critical applications (4C8G+)
Pigsty adopts different parameter optimization strategies for these four default scenarios, as shown below:
Memory Parameter Tuning
Pigsty automatically detects the system’s memory size and uses it as the basis for setting the maximum number of connections and memory-related parameters.
- pg_max_conn: PostgreSQL maximum connections; auto uses the recommended value for each scenario
- pg_shared_buffer_ratio: Shared buffer memory ratio, default is 0.25
By default, Pigsty uses 25% of memory as PostgreSQL shared buffers, with the remaining 75% as the operating system cache.
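Both knobs can be overridden per cluster. A minimal sketch (the cluster name and address here are illustrative):
pg-custom:
  hosts:
    10.10.10.41: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-custom
    pg_max_conn: 800             # explicit max_connections instead of 'auto'
    pg_shared_buffer_ratio: 0.3  # give 30% of memory to shared_buffers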
By default, if the user has not set a pg_max_conn maximum connections value, Pigsty will use defaults according to the following rules:
- oltp: 500 (pgbouncer) / 1000 (postgres)
- crit: 500 (pgbouncer) / 1000 (postgres)
- tiny: 300
- olap: 300
For OLTP and CRIT templates, if the service is not pointing to the pgbouncer connection pool but directly connects to the postgres database, the maximum connections will be doubled to 1000.
After the maximum number of connections is determined, work_mem is calculated as shared_buffers / max_connections and clamped to the range 64MB–1GB.
{% if pg_max_conn != 'auto' and pg_max_conn|int >= 20 %}
{%   set pg_max_connections = pg_max_conn|int %}
{% else %}
{%   if pg_default_service_dest|default('postgres') == 'pgbouncer' %}
{%     set pg_max_connections = 500 %}
{%   else %}
{%     set pg_max_connections = 1000 %}
{%   endif %}
{% endif %}
{% set pg_max_prepared_transactions = pg_max_connections if 'citus' in pg_libs else 0 %}
{% set pg_max_locks_per_transaction = (2 * pg_max_connections)|int if 'citus' in pg_libs or 'timescaledb' in pg_libs else pg_max_connections %}
{% set pg_shared_buffers = (node_mem_mb|int * pg_shared_buffer_ratio|float) | round(0, 'ceil') | int %}
{% set pg_maintenance_mem = (pg_shared_buffers|int * 0.25)|round(0, 'ceil')|int %}
{% set pg_effective_cache_size = node_mem_mb|int - pg_shared_buffers|int %}
{% set pg_workmem = ([ ([ (pg_shared_buffers / pg_max_connections)|round(0,'floor')|int , 64 ])|max|int , 1024])|min|int %}
CPU Parameter Tuning
In PostgreSQL, there are 4 important parameters related to parallel queries. Pigsty automatically optimizes parameters based on the current system’s CPU cores.
In all strategies, the total number of parallel processes (total budget) is usually set to CPU cores + 8, with a minimum of 16, to reserve enough background workers for logical replication and extensions. The OLAP and TINY templates vary slightly based on scenarios.
| OLTP | Setting Logic | Range Limits |
|---|---|---|
| max_worker_processes | max(100% CPU + 8, 16) | CPU cores + 8, minimum 16 |
| max_parallel_workers | max(ceil(50% CPU), 2) | 1/2 CPU rounded up, minimum 2 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 2) | 1/3 CPU rounded up, minimum 2 |
| max_parallel_workers_per_gather | min(max(ceil(20% CPU), 2), 8) | 1/5 CPU rounded up, minimum 2, max 8 |
| OLAP | Setting Logic | Range Limits |
|---|---|---|
| max_worker_processes | max(100% CPU + 12, 20) | CPU cores + 12, minimum 20 |
| max_parallel_workers | max(ceil(80% CPU), 2) | 4/5 CPU rounded up, minimum 2 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 2) | 1/3 CPU rounded up, minimum 2 |
| max_parallel_workers_per_gather | max(floor(50% CPU), 2) | 1/2 CPU rounded down, minimum 2 |
| CRIT | Setting Logic | Range Limits |
|---|---|---|
| max_worker_processes | max(100% CPU + 8, 16) | CPU cores + 8, minimum 16 |
| max_parallel_workers | max(ceil(50% CPU), 2) | 1/2 CPU rounded up, minimum 2 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 2) | 1/3 CPU rounded up, minimum 2 |
| max_parallel_workers_per_gather | 0, enable as needed | |
| TINY | Setting Logic | Range Limits |
|---|---|---|
| max_worker_processes | max(100% CPU + 4, 12) | CPU cores + 4, minimum 12 |
| max_parallel_workers | max(ceil(50% CPU), 1) | 1/2 CPU rounded up, minimum 1 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 1) | 1/3 CPU rounded up, minimum 1 |
| max_parallel_workers_per_gather | 0, enable as needed | |
Note that the CRIT and TINY templates disable parallel queries by setting max_parallel_workers_per_gather = 0.
Users can enable parallel queries as needed by setting this parameter.
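For example, using the pg_parameters override mechanism that appears later in this doc (cluster name and value are illustrative):
pg-tiny-parallel:
  vars:
    pg_conf: tiny.yml
    pg_parameters:
      max_parallel_workers_per_gather: 2   # re-enable limited parallel query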
Both OLTP and CRIT templates additionally set the following parameters, doubling the parallel query cost to reduce the tendency to use parallel queries.
parallel_setup_cost: 2000 # double from 1000 to increase parallel cost
parallel_tuple_cost: 0.2 # double from 0.1 to increase parallel cost
min_parallel_table_scan_size: 32MB # 4x default 8MB, prefer non-parallel scan
min_parallel_index_scan_size: 2MB # 4x default 512kB, prefer non-parallel scan
Note that adjustments to the max_worker_processes parameter only take effect after a restart. Additionally, when a replica's configured value for this parameter is lower than the primary's, the replica will fail to start.
This parameter must be adjusted through Patroni configuration management, which ensures consistent primary-replica configuration and prevents new replicas from failing to start during failover.
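For example, using the -p form shown in the Manual Parameter Tuning section below (the value 24 and the pg-test cluster name are illustrative; a pending restart will still be required):
pg edit-config -p max_worker_processes=24 pg-test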
Storage Space Parameters
Pigsty automatically detects the total space of the disk where the /data/postgres main data directory is located and uses it as the basis for specifying the following parameters:
min_wal_size: {{ ([pg_size_twentieth, 200])|min }}GB # 1/20 disk size, max 200GB
max_wal_size: {{ ([pg_size_twentieth * 4, 2000])|min }}GB # 2/10 disk size, max 2000GB
max_slot_wal_keep_size: {{ ([pg_size_twentieth * 6, 3000])|min }}GB # 3/10 disk size, max 3000GB
temp_file_limit: {{ ([pg_size_twentieth, 200])|min }}GB # 1/20 of disk size, max 200GB
- temp_file_limit defaults to 5% of disk space, capped at 200GB.
- min_wal_size defaults to 5% of disk space, capped at 200GB.
- max_wal_size defaults to 20% of disk space, capped at 2TB.
- max_slot_wal_keep_size defaults to 30% of disk space, capped at 3TB.
As a special case, the OLAP template allows 20% for temp_file_limit, capped at 2TB.
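A quick worked example, assuming a 2TB (~2000GB) data disk, so pg_size_twentieth ≈ 100 (illustrative arithmetic only):
min_wal_size: 100GB            # min(100, 200)
max_wal_size: 400GB            # min(100*4, 2000)
max_slot_wal_keep_size: 600GB  # min(100*6, 3000)
temp_file_limit: 100GB         # min(100, 200); OLAP: min(100*4, 2000) = 400GB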
Manual Parameter Tuning
In addition to using Pigsty’s automatically configured parameters, you can also manually tune PostgreSQL parameters.
Use the pg edit-config <cluster> command to interactively edit cluster configuration:
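For example, to open the configuration of the pg-meta cluster (the cluster name matches the examples below) in your editor:
pg edit-config pg-meta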
Or use the -p parameter to directly set parameters:
pg edit-config -p log_min_duration_statement=1000 pg-meta
pg edit-config --force -p shared_preload_libraries='timescaledb, pg_cron, pg_stat_statements, auto_explain' pg-meta
You can also use the Patroni REST API to modify configuration:
curl -u 'postgres:Patroni.API' \
-d '{"postgresql":{"parameters": {"log_min_duration_statement":200}}}' \
-s -X PATCH http://10.10.10.10:8008/config | jq .
2 - OLTP Template
PostgreSQL config template optimized for online transaction processing workloads
oltp.yml is Pigsty’s default config template, optimized for online transaction processing (OLTP). Designed for 4-128 core CPUs with high concurrency, low latency, and high throughput.
Pair with node_tune = oltp for OS-level tuning.
Use Cases
OLTP template is ideal for:
- E-commerce: Order processing, inventory, user transactions
- Social apps: User feeds, messaging, following relationships
- Gaming backends: Player data, leaderboards, game state
- SaaS applications: Multi-tenant business systems
- Web apps: CRUD-intensive workloads
Workload characteristics:
- Many short transactions (millisecond-level)
- High concurrent connections (hundreds to thousands)
- Read/write ratio typically 7:3 to 9:1
- Latency-sensitive, requires fast response
- High data consistency requirements
Usage
oltp.yml is the default template, no explicit specification needed:
pg-oltp:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
vars:
pg_cluster: pg-oltp
# pg_conf: oltp.yml # PostgreSQL config template (default)
# node_tune: oltp # OS tuning template (default)
Or explicitly specify:
pg-oltp:
vars:
pg_conf: oltp.yml # PostgreSQL config template
node_tune: oltp # OS tuning template
Parameter Details
Connection Management
max_connections: 500/1000 # depends on pgbouncer usage
superuser_reserved_connections: 10
- When pg_default_service_dest is pgbouncer, max_connections is set to 500
- When traffic connects directly to PostgreSQL, max_connections is set to 1000
- Override via the pg_max_conn parameter
Memory Config
OLTP template memory allocation strategy:
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 25% | For VACUUM, CREATE INDEX |
| work_mem | 64MB - 1GB | Based on shared_buffers/max_connections |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
work_mem calculation:
work_mem = min(max(shared_buffers / max_connections, 64MB), 1GB)
Ensures each connection has sufficient sort/hash memory without over-allocation.
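For example, on a hypothetical 128GB node with the default 25% ratio: shared_buffers = 32GB, and with 500 pooled connections 32768MB / 500 ≈ 65MB, which falls inside the 64MB–1GB clamp, so work_mem ≈ 65MB.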
Parallel Query
OLTP template moderately limits parallel queries to prevent resource contention:
max_worker_processes: cpu + 8 (min 16)
max_parallel_workers: 50% × cpu (min 2)
max_parallel_workers_per_gather: 20% × cpu (2-8)
max_parallel_maintenance_workers: 33% × cpu (min 2)
Parallel cost estimates are increased to favor serial execution:
parallel_setup_cost: 2000 # 2x default (1000)
parallel_tuple_cost: 0.2 # 2x default (0.1)
min_parallel_table_scan_size: 32MB # 4x default (8MB), prefer non-parallel scan
min_parallel_index_scan_size: 2MB # 4x default (512kB), prefer non-parallel scan
WAL Config
min_wal_size: disk/20 (max 200GB)
max_wal_size: disk/5 (max 2000GB)
max_slot_wal_keep_size: disk×3/10 (max 3000GB)
wal_buffers: 16MB
wal_writer_delay: 20ms
wal_writer_flush_after: 1MB
commit_delay: 20
commit_siblings: 10
checkpoint_timeout: 15min
checkpoint_completion_target: 0.80
Balances data safety and write performance.
Vacuum Config
vacuum_cost_delay: 20ms # sleep after each vacuum round
vacuum_cost_limit: 2000 # cost limit per vacuum round
autovacuum_max_workers: 3
autovacuum_naptime: 1min
autovacuum_vacuum_scale_factor: 0.08 # 8% table change triggers vacuum
autovacuum_analyze_scale_factor: 0.04 # 4% table change triggers analyze
autovacuum_freeze_max_age: 1000000000
Conservative vacuum settings avoid impacting online transaction performance.
Query Optimization
random_page_cost: 1.1 # SSD optimized
effective_io_concurrency: 200 # SSD concurrent IO
default_statistics_target: 400 # Statistics precision
Enables planner to generate better query plans.
Logging & Monitoring
log_min_duration_statement: 100 # log queries > 100ms
log_statement: ddl # log DDL statements
log_checkpoints: on
log_lock_waits: on
log_temp_files: 1024 # log temp files > 1MB
log_autovacuum_min_duration: 1s
track_io_timing: on
track_functions: all
track_activity_query_size: 8192
Client Timeouts
deadlock_timeout: 50ms
idle_in_transaction_session_timeout: 10min
10-minute idle transaction timeout prevents zombie transactions holding locks.
Extension Config
shared_preload_libraries: 'pg_stat_statements, auto_explain'
# auto_explain
auto_explain.log_min_duration: 1s
auto_explain.log_analyze: on
auto_explain.log_verbose: on
auto_explain.log_timing: on
auto_explain.log_nested_statements: true
# pg_stat_statements
pg_stat_statements.max: 10000
pg_stat_statements.track: all
pg_stat_statements.track_utility: off
pg_stat_statements.track_planning: off
Template Comparison
| Feature | OLTP | OLAP | CRIT |
|---|---|---|---|
| max_connections | 500-1000 | 500 | 500-1000 |
| work_mem | 64MB-1GB | 64MB-8GB | 64MB-1GB |
| Parallel query | Moderate limit | Aggressive | Disabled |
| Vacuum intensity | Conservative | Aggressive | Conservative |
| Txn timeout | 10min | Disabled | 1min |
| Slow query threshold | 100ms | 1000ms | 100ms |
Why OLTP over OLAP?
- Queries are mostly simple point/range lookups
- Transaction response time requires milliseconds
- High concurrent connections
- No complex analytical queries
Why OLTP over CRIT?
- Small probability of data loss acceptable (async replication)
- Complete audit logs not required
- Better write performance desired
Connection Pooling
For high concurrency, use PgBouncer connection pool:
pg-oltp:
vars:
pg_default_service_dest: pgbouncer # default
pgbouncer_poolmode: transaction # transaction-level pooling
Read Separation
Use read replicas to share read load:
pg-oltp:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
10.10.10.13: { pg_seq: 3, pg_role: replica }
Monitoring Metrics
Focus on these metrics:
- Connections: Active/waiting connection counts
- Transaction rate: TPS, commit/rollback ratio
- Response time: Query latency percentiles (p50/p95/p99)
- Lock waits: Lock wait time, deadlock counts
- Replication lag: Replica delay time and bytes
3 - OLAP Template
PostgreSQL config template optimized for online analytical processing workloads
olap.yml is optimized for online analytical processing (OLAP). Designed for 4-128 core CPUs with support for large queries, high parallelism, relaxed timeouts, and aggressive vacuum.
Pair with node_tune = olap for OS-level tuning.
Use Cases
OLAP template is ideal for:
- Data warehouses: Historical data storage, multidimensional analysis
- BI reports: Complex report queries, dashboard data sources
- ETL processing: Data extraction, transformation, loading
- Data analysis: Ad-hoc queries, data exploration
- HTAP mixed workloads: Analytical replicas
Workload characteristics:
- Complex queries (seconds to minutes)
- Low concurrent connections (tens to hundreds)
- Read-intensive, writes typically batch operations
- Throughput-sensitive, tolerates higher latency
- Scans large data volumes
Usage
Specify pg_conf = olap.yml in cluster definition:
pg-olap:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
vars:
pg_cluster: pg-olap
pg_conf: olap.yml # PostgreSQL analytics template
node_tune: olap # OS analytics tuning
Use olap.yml template for dedicated offline replicas:
pg-mixed:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
10.10.10.13: { pg_seq: 3, pg_role: offline, pg_conf: olap.yml } # offline analytics replica
vars:
pg_cluster: pg-mixed
pg_conf: oltp.yml # primary and online replicas use OLTP
node_tune: oltp # OS OLTP tuning
Parameter Details
Connection Management
max_connections: 500
superuser_reserved_connections: 10
OLAP scenarios typically don’t need many connections; 500 is sufficient for most analytical workloads.
Memory Config
OLAP template uses more aggressive memory allocation:
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 50% | Faster index creation and VACUUM |
| work_mem | 64MB - 8GB | Larger sort/hash memory |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
work_mem calculation (differs from OLTP):
work_mem = min(max(shared_buffers / max_connections, 64MB), 8GB)
Larger work_mem allows bigger sort and hash operations in memory, avoiding disk spill.
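For example, on a hypothetical 128GB analytics node: shared_buffers = 32GB and 32768MB / 500 ≈ 65MB by default; lowering pg_max_conn to 50 on a dedicated warehouse instance raises this to 32768MB / 50 ≈ 655MB, still well under the 8GB cap.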
Locks & Transactions
max_locks_per_transaction: 2-4x maxconn # OLTP: 1-2x
OLAP queries may involve more tables (partitions, many JOINs), requiring more lock slots.
Parallel Query
OLAP template aggressively enables parallel queries:
max_worker_processes: cpu + 12 (min 20) # OLTP: cpu + 8
max_parallel_workers: 80% × cpu (min 2) # OLTP: 50%
max_parallel_workers_per_gather: 50% × cpu # OLTP: 20% (max 8)
max_parallel_maintenance_workers: 33% × cpu
Parallel cost estimates use defaults to favor parallel plans:
# parallel_setup_cost: 1000 # default, not doubled
# parallel_tuple_cost: 0.1 # default, not doubled
Partition-wise optimization enabled:
enable_partitionwise_join: on # smart partition JOIN
enable_partitionwise_aggregate: on # smart partition aggregation
IO Config (PG17+)
io_workers: 50% × cpu (4-32) # OLTP: 25% (4-16)
More IO workers support parallel large table scans.
WAL Config
min_wal_size: disk/20 (max 200GB)
max_wal_size: disk/5 (max 2000GB)
max_slot_wal_keep_size: disk×3/10 (max 3000GB)
temp_file_limit: disk/5 (max 2000GB) # OLTP: disk/20
Larger temp_file_limit allows bigger intermediate results to spill to disk.
Vacuum Config
OLAP template uses aggressive vacuum settings:
vacuum_cost_delay: 10ms # OLTP: 20ms, faster vacuum
vacuum_cost_limit: 10000 # OLTP: 2000, more work per round
autovacuum_max_workers: 3
autovacuum_naptime: 1min
autovacuum_vacuum_scale_factor: 0.08
autovacuum_analyze_scale_factor: 0.04
Analytical databases often have bulk writes requiring aggressive vacuum to reclaim space.
Query Optimization
random_page_cost: 1.1
effective_io_concurrency: 200
default_statistics_target: 1000 # OLTP: 400, more precise stats
Higher default_statistics_target provides more accurate query plans, crucial for complex analytics.
Logging & Monitoring
log_min_duration_statement: 1000 # OLTP: 100ms, relaxed threshold
log_statement: ddl
log_checkpoints: on
log_lock_waits: on
log_temp_files: 1024
log_autovacuum_min_duration: 1s
track_io_timing: on
track_cost_delay_timing: on # PG18+, track vacuum cost delay
track_functions: all
track_activity_query_size: 8192
Client Timeouts
deadlock_timeout: 50ms
idle_in_transaction_session_timeout: 0 # OLTP: 10min, disabled
Analytical queries may need to hold transactions for extended periods, so idle timeout is disabled.
Key Differences from OLTP
| Parameter | OLAP | OLTP | Reason |
|---|---|---|---|
| max_connections | 500 | 500-1000 | Fewer analytical connections |
| work_mem limit | 8GB | 1GB | Support larger in-memory sorts |
| maintenance_work_mem | 50% buffer | 25% buffer | Faster index creation |
| max_locks_per_transaction | 2-4x | 1-2x | More tables in queries |
| max_parallel_workers | 80% cpu | 50% cpu | Aggressive parallelism |
| max_parallel_workers_per_gather | 50% cpu | 20% cpu | Aggressive parallelism |
| parallel_setup_cost | 1000 | 2000 | Default, encourages parallel |
| parallel_tuple_cost | 0.1 | 0.2 | Default, encourages parallel |
| enable_partitionwise_join | on | off | Partition optimization |
| enable_partitionwise_aggregate | on | off | Partition optimization |
| vacuum_cost_delay | 10ms | 20ms | Aggressive vacuum |
| vacuum_cost_limit | 10000 | 2000 | Aggressive vacuum |
| temp_file_limit | 1/5 disk | 1/20 disk | Allow larger temp files |
| io_workers | 50% cpu | 25% cpu | More parallel IO |
| log_min_duration_statement | 1000ms | 100ms | Relaxed slow query threshold |
| default_statistics_target | 1000 | 400 | More precise stats |
| idle_in_transaction_session_timeout | Disabled | 10min | Allow long transactions |
With TimescaleDB
OLAP template works great with TimescaleDB:
pg-timeseries:
vars:
pg_conf: olap.yml
pg_libs: 'timescaledb, pg_stat_statements, auto_explain'
pg_extensions:
- timescaledb
With pg_duckdb
For ultimate analytical performance, combine with pg_duckdb:
pg-analytics:
vars:
pg_conf: olap.yml
pg_libs: 'pg_duckdb, pg_stat_statements, auto_explain'
Columnar Storage
Consider columnar storage extensions:
pg_extensions:
- citus_columnar # or pg_mooncake
Resource Isolation
For mixed workloads, isolate analytics to dedicated replicas:
pg-mixed:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary } # OLTP writes
10.10.10.12: { pg_seq: 2, pg_role: replica } # OLTP reads
10.10.10.13: { pg_seq: 3, pg_role: offline } # OLAP analytics
vars:
pg_cluster: pg-mixed
Monitoring Metrics
Focus on these metrics:
- Query time: Long query execution time distribution
- Parallelism: Parallel worker utilization
- Temp files: Temp file size and count
- Disk IO: Sequential and index scan IO volume
- Cache hit ratio: shared_buffers and OS cache hit rates
4 - CRIT Template
PostgreSQL config template optimized for critical/financial workloads with data safety and audit compliance
crit.yml is optimized for critical/financial workloads. Designed for 4-128 core CPUs with forced sync replication, data checksums, full audit logging, and strict security. Trades performance for maximum data safety.
Pair with node_tune = crit for OS-level tuning, optimizing dirty page management.
Use Cases
CRIT template is ideal for:
- Financial transactions: Bank transfers, payment settlement, securities trading
- Core accounting: General ledger systems, accounting systems
- Compliance audit: Businesses requiring complete operation records
- Critical business: Any scenario that cannot tolerate data loss
Requirements:
- Zero data loss (RPO = 0)
- Data integrity verification
- Complete audit logs
- Strict security policies
- Acceptable performance trade-offs
Usage
Specify pg_conf = crit.yml in cluster definition:
pg-finance:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
10.10.10.13: { pg_seq: 3, pg_role: replica }
vars:
pg_cluster: pg-finance
pg_conf: crit.yml # PostgreSQL critical template
node_tune: crit # OS critical tuning
Recommendation: Critical clusters should have at least 3 nodes to maintain sync replication when one node fails.
Core Features
Forced Sync Replication
CRIT template forces sync replication regardless of pg_rpo setting:
synchronous_mode: true # forced on, ignores pg_rpo
Every transaction commit waits for at least one replica confirmation, ensuring RPO = 0 (zero data loss).
Cost: Write latency increases (typically 1-5ms depending on network).
Forced Data Checksums
CRIT template forces data checksums regardless of pg_checksum setting:
initdb:
- data-checksums # forced on, ignores pg_checksum
Data checksums detect silent disk corruption (bit rot), critical for financial data.
Disabled Parallel Query
CRIT template disables parallel query gather operations:
max_parallel_workers_per_gather: 0 # parallel queries disabled
Parallel cost estimates are also increased:
parallel_setup_cost: 2000
parallel_tuple_cost: 0.2
min_parallel_table_scan_size: 32MB
min_parallel_index_scan_size: 2MB
Reason: Parallel queries may cause unstable latency. For latency-sensitive financial transactions, predictable stable performance is more important.
Parameter Details
Connection Management
max_connections: 500/1000 # depends on pgbouncer usage
superuser_reserved_connections: 10
Same as OLTP template.
Memory Config
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 25% | For VACUUM, CREATE INDEX |
| work_mem | 64MB - 1GB | Same as OLTP |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
WAL Config (Key Differences)
wal_writer_delay: 10ms # OLTP: 20ms, more frequent flush
wal_writer_flush_after: 0 # OLTP: 1MB, immediate flush, no buffer
idle_replication_slot_timeout: 3d # OLTP: 7d, stricter slot cleanup
wal_writer_flush_after: 0 ensures every WAL write flushes to disk immediately, minimizing data loss risk.
Replication Config (PG15-)
vacuum_defer_cleanup_age: 500000 # PG15 and below only
Preserves 500K recent transactions from vacuum cleanup, providing more catchup buffer for replicas.
Audit Logging (Key Differences)
CRIT template enables full connection audit:
PostgreSQL 18+:
log_connections: 'receipt,authentication,authorization'
PostgreSQL 17 and below:
log_connections: 'on'
log_disconnections: 'on'
Records complete connection lifecycle:
- Connection receipt
- Authentication process
- Authorization result
- Disconnection
Query Logging
log_min_duration_statement: 100 # log queries > 100ms
log_statement: ddl # log all DDL
track_activity_query_size: 32768 # OLTP: 8192, capture full queries
32KB track_activity_query_size ensures capturing complete long query text.
Statistics Tracking
track_io_timing: on
track_cost_delay_timing: on # PG18+, track vacuum cost delay
track_functions: all
track_activity_query_size: 32768
Client Timeouts (Key Differences)
idle_in_transaction_session_timeout: 1min # OLTP: 10min, stricter
1-minute idle transaction timeout quickly releases zombie transactions holding locks.
Extension Config
shared_preload_libraries: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
Note: CRIT template loads passwordcheck by default, enforcing password complexity.
Key Differences from OLTP
| Parameter | CRIT | OLTP | Reason |
|---|---|---|---|
| synchronous_mode | Forced true | Depends on pg_rpo | Zero data loss |
| data-checksums | Forced on | Optional | Data integrity |
| max_parallel_workers_per_gather | 0 | 20% cpu | Stable latency |
| wal_writer_delay | 10ms | 20ms | More frequent flush |
| wal_writer_flush_after | 0 | 1MB | Immediate flush |
| idle_replication_slot_timeout | 3d | 7d | Stricter cleanup |
| idle_in_transaction_session_timeout | 1min | 10min | Quick lock release |
| track_activity_query_size | 32KB | 8KB | Complete query capture |
| log_connections | Full logging | Auth only | Audit compliance |
| log_disconnections | on | off | Audit compliance |
| passwordcheck | Enabled | Not enabled | Password security |
| vacuum_defer_cleanup_age | 500000 | 0 | Replica catchup buffer |
Using the CRIT template has the following impacts:
Increased Write Latency
Sync replication adds 1-5ms write latency (network-dependent):
Async replication: commit -> local flush -> return to client
Sync replication: commit -> local flush -> wait replica confirm -> return to client
Reduced Write Throughput
Due to replica confirmation wait, write TPS may drop 10-30%.
More Stable Query Latency
With parallel queries disabled, query latency is more predictable without parallel startup overhead variance.
Slightly Increased Resource Overhead
More frequent WAL flushes and complete audit logs add extra IO overhead.
HA Configuration
Minimum Recommended Setup
pg-critical:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
10.10.10.13: { pg_seq: 3, pg_role: replica }
vars:
pg_cluster: pg-critical
pg_conf: crit.yml # PostgreSQL critical template
node_tune: crit # OS critical tuning
3-node setup ensures sync replication continues when one node fails.
Cross-DC Deployment
For financial-grade disaster recovery:
pg-critical:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary, pg_weight: 100 } # DC A
10.10.10.12: { pg_seq: 2, pg_role: replica, pg_weight: 100 } # DC A
10.20.10.13: { pg_seq: 3, pg_role: replica, pg_weight: 0 } # DC B (standby)
vars:
pg_cluster: pg-critical
pg_conf: crit.yml # PostgreSQL critical template
node_tune: crit # OS critical tuning
Quorum Commit
For higher consistency, configure multiple sync replicas:
$ pg edit-config pg-critical
synchronous_mode: true
synchronous_node_count: 2 # require 2 replica confirmations
Security Hardening Tips
Password Policy
CRIT template has passwordcheck enabled; further configure:
-- Set password encryption
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
Audit Extension
Consider pgaudit for detailed auditing:
pg_libs: 'pg_stat_statements, auto_explain, pgaudit'
pg_parameters:
pgaudit.log: 'ddl, role, write'
Network Isolation
Ensure database network is isolated; use HBA rules to restrict access.
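A minimal sketch of restricting access via Pigsty HBA rules (the subnet and values below are illustrative assumptions; verify the exact pg_hba_rules field schema against the HBA documentation for your Pigsty version):
pg-finance:
  vars:
    pg_hba_rules:
      - { user: all, db: all, addr: 10.10.10.0/24, auth: pwd, title: 'allow app subnet only, password auth' }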
Monitoring Metrics
For critical clusters, focus on:
- Replication lag: Sync lag should be near zero
- Transaction commit time: p99 latency
- Lock waits: Long lock waits may impact business
- Checkpoints: Checkpoint duration and frequency
- WAL generation rate: Predict disk space needs
5 - TINY Template
PostgreSQL config template optimized for micro instances and resource-constrained environments
tiny.yml is optimized for micro instances and resource-constrained environments. Designed for 1-3 core CPUs with minimal resource usage, conservative memory allocation, and disabled parallel queries.
Pair with node_tune = tiny for OS-level tuning.
Use Cases
TINY template is ideal for:
- Dev/test: Local development, CI/CD testing
- Low-spec VMs: 1-2 core CPU, 1-4GB RAM cloud instances
- Edge computing: Raspberry Pi, embedded devices
- Demos: Quick Pigsty experience
- Personal projects: Resource-limited blogs, small apps
Resource constraints:
- 1-3 CPU cores
- 1-8 GB RAM
- Limited disk space
- May share resources with other services
Usage
Specify pg_conf = tiny.yml in cluster definition:
pg-dev:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
vars:
pg_cluster: pg-dev
pg_conf: tiny.yml # PostgreSQL micro instance template
node_tune: tiny # OS micro instance tuning
Single-node development:
pg-local:
hosts:
127.0.0.1: { pg_seq: 1, pg_role: primary }
vars:
pg_cluster: pg-local
pg_conf: tiny.yml # PostgreSQL micro instance template
node_tune: tiny # OS micro instance tuning
Parameter Details
Connection Management
max_connections: 250 # OLTP: 500-1000, reduced connection overhead
superuser_reserved_connections: 10
Micro instances don’t need many concurrent connections; 250 is sufficient for dev/test.
Memory Config
TINY template uses conservative memory allocation:
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 25% | For VACUUM, CREATE INDEX |
| work_mem | 16MB - 256MB | Smaller sort/hash memory |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
work_mem calculation (differs from OLTP):
work_mem = min(max(shared_buffers / max_connections, 16MB), 256MB)
Smaller work_mem limit (256MB vs OLTP’s 1GB) prevents memory exhaustion.
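Worked through for the 1-core/1GB case in the Resource Estimates section below: shared_buffers ≈ 256MB, so 256MB / 250 ≈ 1MB, which is clamped up to the 16MB floor, giving work_mem ≈ 16MB.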
Parallel Query (Fully Disabled)
TINY template completely disables parallel queries:
max_worker_processes: cpu + 4 (min 12) # OLTP: cpu + 8
max_parallel_workers: 50% × cpu (min 1) # OLTP: 50% (min 2)
max_parallel_workers_per_gather: 0 # parallel queries disabled
max_parallel_maintenance_workers: 33% × cpu (min 1)
max_parallel_workers_per_gather: 0 ensures queries won’t spawn parallel workers, avoiding resource contention on low-core systems.
IO Config (PG17+)
io_workers: 3 # fixed value, OLTP: 25% cpu (4-16)
Fixed low IO worker count suitable for resource-constrained environments.
Vacuum Config
vacuum_cost_delay: 20ms
vacuum_cost_limit: 2000
autovacuum_max_workers: 2 # OLTP: 3, one fewer worker
autovacuum_naptime: 1min
# autovacuum_vacuum_scale_factor uses default
# autovacuum_analyze_scale_factor uses default
Fewer autovacuum workers reduce background resource usage.
Query Optimization
random_page_cost: 1.1
effective_io_concurrency: 200
default_statistics_target: 200 # OLTP: 400, lower precision saves space
Lower default_statistics_target reduces pg_statistic table size.
Logging Config
log_min_duration_statement: 100 # same as OLTP
log_statement: ddl
log_checkpoints: on
log_lock_waits: on
log_temp_files: 1024
# log_connections uses default (no extra logging)
TINY template doesn’t enable extra connection logging to reduce log volume.
Client Timeouts
deadlock_timeout: 50ms
idle_in_transaction_session_timeout: 10min # same as OLTP
Extension Config
shared_preload_libraries: 'pg_stat_statements, auto_explain'
pg_stat_statements.max: 2500 # OLTP: 10000, reduced memory usage
pg_stat_statements.track: all
pg_stat_statements.track_utility: off
pg_stat_statements.track_planning: off
pg_stat_statements.max reduced from 10000 to 2500, saving ~75% memory.
Key Differences from OLTP
| Parameter | TINY | OLTP | Reason |
|---|---|---|---|
| max_connections | 250 | 500-1000 | Reduce connection overhead |
| work_mem limit | 256MB | 1GB | Prevent memory exhaustion |
| max_worker_processes | cpu+4 | cpu+8 | Fewer background processes |
| max_parallel_workers_per_gather | 0 | 20% cpu | Disable parallel queries |
| autovacuum_max_workers | 2 | 3 | Reduce background load |
| default_statistics_target | 200 | 400 | Save space |
| pg_stat_statements.max | 2500 | 10000 | Reduce memory usage |
| io_workers | 3 | 25% cpu | Fixed low value |
Resource Estimates
TINY template resource usage by configuration:
1 Core 1GB RAM
shared_buffers: ~256MB
work_mem: ~16MB
maintenance_work_mem: ~64MB
max_connections: 250
max_worker_processes: ~12
PostgreSQL process memory: ~400-600MB
2 Core 4GB RAM
shared_buffers: ~1GB
work_mem: ~32MB
maintenance_work_mem: ~256MB
max_connections: 250
max_worker_processes: ~12
PostgreSQL process memory: ~1.5-2GB
4 Core 8GB RAM
Consider using OLTP template instead:
pg-small:
vars:
pg_conf: oltp.yml # 4C8G can use OLTP template
Further Resource Reduction
For extremely constrained resources:
pg_parameters:
max_connections: 100 # further reduce
shared_buffers: 128MB # further reduce
maintenance_work_mem: 32MB
work_mem: 8MB
Disable Unnecessary Extensions
pg_libs: 'pg_stat_statements' # keep only essential extensions
Disable Unnecessary Features
pg_parameters:
track_io_timing: off # disable IO timing tracking
track_functions: none # disable function tracking
Use External Connection Pool
Even on micro instances, PgBouncer significantly improves concurrency:
pg-tiny:
vars:
pg_conf: tiny.yml
pg_default_service_dest: pgbouncer
pgbouncer_poolmode: transaction
Cloud Instance Selection
AWS
- t3.micro: 1 vCPU, 1GB RAM - suitable for TINY
- t3.small: 2 vCPU, 2GB RAM - suitable for TINY
- t3.medium: 2 vCPU, 4GB RAM - consider OLTP
Alibaba Cloud
- ecs.t6-c1m1.small: 1 vCPU, 1GB RAM - suitable for TINY
- ecs.t6-c1m2.small: 1 vCPU, 2GB RAM - suitable for TINY
- ecs.t6-c1m4.small: 1 vCPU, 4GB RAM - suitable for TINY
Tencent Cloud
- SA2.SMALL1: 1 vCPU, 1GB RAM - suitable for TINY
- SA2.SMALL2: 1 vCPU, 2GB RAM - suitable for TINY
- SA2.SMALL4: 1 vCPU, 4GB RAM - suitable for TINY
Edge Device Deployment
Raspberry Pi 4
pg-pi:
hosts:
192.168.1.100: { pg_seq: 1, pg_role: primary }
vars:
pg_cluster: pg-pi
pg_conf: tiny.yml # PostgreSQL micro instance template
node_tune: tiny # OS micro instance tuning
pg_storage_type: SSD # SSD storage recommended
Docker Container
pg-docker:
hosts:
172.17.0.2: { pg_seq: 1, pg_role: primary }
vars:
pg_cluster: pg-docker
pg_conf: tiny.yml # PostgreSQL micro instance template
node_tune: tiny # OS micro instance tuning
Upgrading to OLTP
When your application grows and needs more resources, easily upgrade to OLTP template:
- Upgrade VM specs (4 core 8GB+)
- Modify cluster config:
pg-growing:
vars:
pg_conf: oltp.yml # change from tiny.yml to oltp.yml
node_tune: oltp # change from tiny to oltp
- Reconfigure cluster or redeploy