Config Templates
Use Pigsty’s built-in Patroni config templates or customize your own
Pigsty provides four preset Patroni/PostgreSQL config templates optimized for different workloads:
| Template | CPU Cores | Use Case | Characteristics |
|---|---|---|---|
| oltp.yml | 4-128C | OLTP transactions | High concurrency, low latency |
| olap.yml | 4-128C | OLAP analytics | Large queries, high parallelism |
| crit.yml | 4-128C | Critical/Finance | Data safety, audit, zero-loss |
| tiny.yml | 1-3C | Tiny instances | Resource-constrained envs |
Use pg_conf to select a template; default is oltp.yml.
The database tuning template pg_conf should be paired with the OS tuning template node_tune.
Usage
Set pg_conf in your cluster definition.
It’s recommended to set node_tune accordingly for OS-level tuning:
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-test
    pg_conf: oltp.yml              # PostgreSQL config template (default)
    node_tune: oltp                # OS tuning template (default)
For critical financial workloads, use crit.yml:
pg-finance:
  hosts:
    10.10.10.21: { pg_seq: 1, pg_role: primary }
    10.10.10.22: { pg_seq: 2, pg_role: replica }
    10.10.10.23: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-finance
    pg_conf: crit.yml              # PostgreSQL critical template
    node_tune: crit                # OS critical tuning
For low-spec VMs or dev environments, use tiny.yml:
pg-dev:
  hosts:
    10.10.10.31: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-dev
    pg_conf: tiny.yml              # PostgreSQL tiny template
    node_tune: tiny                # OS tiny tuning
Comparison
The four templates differ significantly in key parameters:
Connections & Memory
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| max_connections | 500/1000 | 500 | 500/1000 | 250 |
| work_mem range | 64MB-1GB | 64MB-8GB | 64MB-1GB | 16MB-256MB |
| maintenance_work_mem | 25% shmem | 50% shmem | 25% shmem | 25% shmem |
| max_locks_per_transaction | 1-2x maxconn | 2-4x maxconn | 1-2x maxconn | 1-2x maxconn |
Parallel Query
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| max_worker_processes | cpu+8 | cpu+12 | cpu+8 | cpu+4 |
| max_parallel_workers | 50% cpu | 80% cpu | 50% cpu | 50% cpu |
| max_parallel_workers_per_gather | 20% cpu (max 8) | 50% cpu | 0 (off) | 0 (off) |
| parallel_setup_cost | 2000 | 1000 | 2000 | 1000 |
| parallel_tuple_cost | 0.2 | 0.1 | 0.2 | 0.1 |
Sync Replication
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| synchronous_mode | depends on pg_rpo | depends on pg_rpo | forced on | depends on pg_rpo |
| data_checksums | optional | optional | forced on | optional |
Vacuum Config
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| vacuum_cost_delay | 20ms | 10ms | 20ms | 20ms |
| vacuum_cost_limit | 2000 | 10000 | 2000 | 2000 |
| autovacuum_max_workers | 3 | 3 | 3 | 2 |
Timeout & Security
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| idle_in_transaction_session_timeout | 10min | off | 1min | 10min |
| log_min_duration_statement | 100ms | 1000ms | 100ms | 100ms |
| default_statistics_target | 400 | 1000 | 400 | 200 |
| track_activity_query_size | 8KB | 8KB | 32KB | 8KB |
| log_connections | auth | auth | full | default |
IO Config (PG17+)
| Parameter | OLTP | OLAP | CRIT | TINY |
|---|---|---|---|---|
| io_workers | 25% cpu (4-16) | 50% cpu (4-32) | 25% cpu (4-8) | 3 |
| temp_file_limit | 1/20 disk | 1/5 disk | 1/20 disk | 1/20 disk |
Selection Guide
OLTP Template: Default choice for most transaction processing. Ideal for e-commerce, social, gaming apps.
OLAP Template: For data warehouses, BI reports, ETL. Allows large queries, high parallelism, relaxed timeouts.
CRIT Template: For financial transactions, core accounting with strict consistency/security requirements. Forced sync replication, checksums, full audit.
TINY Template: For dev/test environments, resource-constrained VMs, Raspberry Pi. Minimizes resource usage, disables parallel queries.
Custom Templates
Create custom templates based on existing ones. Templates are in roles/pgsql/templates/:
roles/pgsql/templates/
├── oltp.yml # OLTP template (default)
├── olap.yml # OLAP template
├── crit.yml # CRIT critical template
└── tiny.yml # TINY micro template
Steps to create a custom template:
- Copy an existing template as base
- Modify parameters as needed
- Place it in roles/pgsql/templates/
- Reference it via pg_conf
Example:
cp roles/pgsql/templates/oltp.yml roles/pgsql/templates/myapp.yml
# Edit myapp.yml as needed
Then use in your cluster:
pg-myapp:
  vars:
    pg_conf: myapp.yml
Templates use Jinja2 syntax; parameters are dynamically computed based on node resources (CPU, memory, disk).
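For example, here is a minimal sketch of the kind of Jinja2 expression these templates contain (simplified for illustration, not copied verbatim from oltp.yml; node_mem_mb and pg_shared_buffer_ratio are variables the real templates already use):
{% raw %}
{# derive shared_buffers from node memory and the configured buffer ratio #}
{% set pg_shared_buffers = (node_mem_mb|int * pg_shared_buffer_ratio|float) | round(0, 'ceil') | int %}
shared_buffers: {{ pg_shared_buffers }}MB
effective_cache_size: {{ node_mem_mb|int - pg_shared_buffers|int }}MB
{% endraw %}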
Tuning Strategy
For technical details on template parameter optimization, see Tuning Strategy:
- Memory tuning (shared buffers, work mem, max connections)
- CPU tuning (parallel query worker config)
- Storage tuning (WAL size, temp file limits)
- Manual parameter adjustment
1 - Tuning Strategy for the Default Config Templates
Learn the different parameter optimization strategies used by Pigsty's four preset scenario-based Patroni templates
Pigsty ships with four scenario-based parameter templates, selected via the pg_conf parameter.
Pigsty applies a different parameter optimization strategy to each of these four default scenarios, as described below.
Memory Tuning
Pigsty detects the amount of system memory and uses it to set the maximum connection count and memory-related parameters.
By default, Pigsty allocates 25% of memory to the PostgreSQL shared buffers, leaving the remaining 75% as OS cache.
By default, if the user has not set a maximum connection count via pg_max_conn, Pigsty uses the following defaults:
- oltp: 500 (pgbouncer) / 1000 (postgres)
- crit: 500 (pgbouncer) / 1000 (postgres)
- tiny: 300
- olap: 300
For the OLTP and CRIT templates, if the service routes traffic directly to the postgres database instead of the pgbouncer connection pool, the maximum connection count doubles to 1000.
Once the maximum connection count is decided, work_mem is computed as shared memory divided by max connections, clamped to the 64MB - 1GB range.
{% raw %}
{% if pg_max_conn != 'auto' and pg_max_conn|int >= 20 %}{% set pg_max_connections = pg_max_conn|int %}{% else %}{% if pg_default_service_dest|default('postgres') == 'pgbouncer' %}{% set pg_max_connections = 500 %}{% else %}{% set pg_max_connections = 1000 %}{% endif %}{% endif %}
{% set pg_max_prepared_transactions = pg_max_connections if 'citus' in pg_libs else 0 %}
{% set pg_max_locks_per_transaction = (2 * pg_max_connections)|int if 'citus' in pg_libs or 'timescaledb' in pg_libs else pg_max_connections %}
{% set pg_shared_buffers = (node_mem_mb|int * pg_shared_buffer_ratio|float) | round(0, 'ceil') | int %}
{% set pg_maintenance_mem = (pg_shared_buffers|int * 0.25)|round(0, 'ceil')|int %}
{% set pg_effective_cache_size = node_mem_mb|int - pg_shared_buffers|int %}
{% set pg_workmem = ([ ([ (pg_shared_buffers / pg_max_connections)|round(0,'floor')|int , 64 ])|max|int , 1024])|min|int %}
{% endraw %}
CPU Tuning
PostgreSQL has four important parallel-query-related parameters, and Pigsty tunes them automatically based on the system's CPU core count.
In all strategies, the total number of parallel processes (the overall budget) is usually set to CPU cores + 8 with a floor of 16, reserving enough background workers for logical replication and extensions; the OLAP and TINY templates differ slightly for their scenarios.
| OLTP | Formula | Range |
|---|---|---|
| max_worker_processes | max(100% CPU + 8, 16) | cores + 8, at least 16 |
| max_parallel_workers | max(ceil(50% CPU), 2) | 1/2 of CPU rounded up, at least 2 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 2) | 1/3 of CPU rounded up, at least 2 |
| max_parallel_workers_per_gather | min(max(ceil(20% CPU), 2), 8) | 1/5 of CPU, at least 2, at most 8 |
| OLAP | Formula | Range |
|---|---|---|
| max_worker_processes | max(100% CPU + 12, 20) | cores + 12, at least 20 |
| max_parallel_workers | max(ceil(80% CPU), 2) | 4/5 of CPU rounded up, at least 2 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 2) | 1/3 of CPU rounded up, at least 2 |
| max_parallel_workers_per_gather | max(floor(50% CPU), 2) | 1/2 of CPU, at least 2 |
| CRIT | Formula | Range |
|---|---|---|
| max_worker_processes | max(100% CPU + 8, 16) | cores + 8, at least 16 |
| max_parallel_workers | max(ceil(50% CPU), 2) | 1/2 of CPU rounded up, at least 2 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 2) | 1/3 of CPU rounded up, at least 2 |
| max_parallel_workers_per_gather | 0, enable on demand | |
| TINY | Formula | Range |
|---|---|---|
| max_worker_processes | max(100% CPU + 4, 12) | cores + 4, at least 12 |
| max_parallel_workers | max(ceil(50% CPU), 1) | 1/2 of CPU, at least 1 |
| max_parallel_maintenance_workers | max(ceil(33% CPU), 1) | 1/3 of CPU, at least 1 |
| max_parallel_workers_per_gather | 0, enable on demand | |
Note that the CRIT and TINY templates disable parallel query outright by setting max_parallel_workers_per_gather = 0.
Users can set this parameter on demand to re-enable parallel queries.
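If a particular session really needs parallel scans, it can re-enable them on the fly without touching the cluster config; a minimal sketch (the table name is hypothetical):
-- enable parallel query for the current session only
SET max_parallel_workers_per_gather = 4;
EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;   -- hypothetical large table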
The OLTP and CRIT templates additionally set the following parameters, doubling the cost of parallel queries to reduce the planner's tendency to choose them:
parallel_setup_cost: 2000 # doubled from 1000 to increase parallel cost
parallel_tuple_cost: 0.2 # doubled from 0.1 to increase parallel cost
min_parallel_table_scan_size: 16MB # doubled from 8MB to increase parallel cost
min_parallel_index_scan_size: 1024 # doubled from 512 to increase parallel cost
Note that changes to max_worker_processes only take effect after a restart. In addition, if a replica's configured value for this parameter is lower than the primary's, the replica will fail to start.
This parameter should therefore be changed through Patroni configuration management: Patroni owns it and keeps the primary and replica configurations consistent, so that a new replica does not fail to start after a failover.
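For example, to raise the worker budget for the whole cluster through Patroni (pg-test is a placeholder cluster name; the pg edit-config command is covered in the manual tuning section below):
pg edit-config -p max_worker_processes=32 pg-test   # managed by Patroni; a restart is still required to take effect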
Storage Tuning
Pigsty detects the total space of the disk holding the /data/postgres main data directory and uses it to set the following parameters:
{% raw %}
min_wal_size: {{ ([pg_size_twentieth, 200])|min }}GB # 1/20 disk size, max 200GB
max_wal_size: {{ ([pg_size_twentieth * 4, 2000])|min }}GB # 2/10 disk size, max 2000GB
max_slot_wal_keep_size: {{ ([pg_size_twentieth * 6, 3000])|min }}GB # 3/10 disk size, max 3000GB
temp_file_limit: {{ ([pg_size_twentieth, 200])|min }}GB # 1/20 of disk size, max 200GB
{% endraw %}
temp_file_limit defaults to 5% of disk space, capped at 200GB. min_wal_size defaults to 5% of disk space, capped at 200GB. max_wal_size defaults to 20% of disk space, capped at 2TB. max_slot_wal_keep_size defaults to 30% of disk space, capped at 3TB.
As an exception, the OLAP template allows temp_file_limit to reach 20% of disk space, capped at 2TB.
Manual Parameter Adjustment
Besides the parameters Pigsty configures automatically, you can tune PostgreSQL parameters by hand.
Use the pg edit-config <cluster> command to edit the cluster configuration interactively,
or use the -p flag to set parameters directly:
pg edit-config -p log_min_duration_statement=1000 pg-meta
pg edit-config --force -p shared_preload_libraries='timescaledb, pg_cron, pg_stat_statements, auto_explain' pg-meta
You can also modify the configuration through the Patroni REST API:
curl -u 'postgres:Patroni.API' \
-d '{"postgresql":{"parameters": {"log_min_duration_statement":200}}}' \
-s -X PATCH http://10.10.10.10:8008/config | jq .
2 - OLTP Template
PostgreSQL config template optimized for online transaction processing workloads
oltp.yml is Pigsty’s default config template, optimized for online transaction processing (OLTP). Designed for 4-128 core CPUs with high concurrency, low latency, and high throughput.
Pair with node_tune = oltp for OS-level tuning.
Use Cases
OLTP template is ideal for:
- E-commerce: Order processing, inventory, user transactions
- Social apps: User feeds, messaging, following relationships
- Gaming backends: Player data, leaderboards, game state
- SaaS applications: Multi-tenant business systems
- Web apps: CRUD-intensive workloads
Workload characteristics:
- Many short transactions (millisecond-level)
- High concurrent connections (hundreds to thousands)
- Read/write ratio typically 7:3 to 9:1
- Latency-sensitive, requires fast response
- High data consistency requirements
Usage
oltp.yml is the default template, so it does not need to be specified explicitly:
pg-oltp:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-oltp
    # pg_conf: oltp.yml            # PostgreSQL config template (default)
    # node_tune: oltp              # OS tuning template (default)
Or explicitly specify:
pg-oltp:
  vars:
    pg_conf: oltp.yml              # PostgreSQL config template
    node_tune: oltp                # OS tuning template
Parameter Details
Connection Management
max_connections: 500/1000 # depends on pgbouncer usage
superuser_reserved_connections: 10
- When pg_default_service_dest is pgbouncer, max_connections is set to 500
- When traffic connects directly to PostgreSQL, max_connections is set to 1000
- Override this via the pg_max_conn parameter, as shown below
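If these defaults don't fit, the connection ceiling can be set explicitly in the cluster definition; a sketch (the value 1500 is just an illustration):
pg-oltp:
  vars:
    pg_conf: oltp.yml
    pg_max_conn: 1500              # explicit max_connections, overrides the template default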
Memory Config
OLTP template memory allocation strategy:
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 25% | For VACUUM, CREATE INDEX |
| work_mem | 64MB - 1GB | Based on shared_buffers/max_connections |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
work_mem calculation:
work_mem = min(max(shared_buffers / max_connections, 64MB), 1GB)
Ensures each connection has sufficient sort/hash memory without over-allocation.
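For instance, on a 64GB node with the default 25% buffer ratio, shared_buffers is 16GB; with max_connections = 500 that works out to roughly 32MB per connection, which is below the 64MB floor, so work_mem ends up at 64MB.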
Parallel Query
OLTP template moderately limits parallel queries to prevent resource contention:
max_worker_processes: cpu + 8 (min 16)
max_parallel_workers: 50% × cpu (min 2)
max_parallel_workers_per_gather: 20% × cpu (2-8)
max_parallel_maintenance_workers: 33% × cpu (min 2)
Parallel cost estimates are increased to favor serial execution:
parallel_setup_cost: 2000 # 2x default (1000)
parallel_tuple_cost: 0.2 # 2x default (0.1)
min_parallel_table_scan_size: 16MB # 2x default (8MB)
min_parallel_index_scan_size: 1024 # 2x default (512)
WAL Config
min_wal_size: disk/20 (max 200GB)
max_wal_size: disk/5 (max 2000GB)
max_slot_wal_keep_size: disk×3/10 (max 3000GB)
wal_buffers: 16MB
wal_writer_delay: 20ms
wal_writer_flush_after: 1MB
commit_delay: 20
commit_siblings: 10
checkpoint_timeout: 15min
checkpoint_completion_target: 0.80
Balances data safety and write performance.
Vacuum Config
vacuum_cost_delay: 20ms # sleep after each vacuum round
vacuum_cost_limit: 2000 # cost limit per vacuum round
autovacuum_max_workers: 3
autovacuum_naptime: 1min
autovacuum_vacuum_scale_factor: 0.08 # 8% table change triggers vacuum
autovacuum_analyze_scale_factor: 0.04 # 4% table change triggers analyze
autovacuum_freeze_max_age: 1000000000
Conservative vacuum settings avoid impacting online transaction performance.
Query Optimization
random_page_cost: 1.1 # SSD optimized
effective_io_concurrency: 200 # SSD concurrent IO
default_statistics_target: 400 # Statistics precision
Enables planner to generate better query plans.
Logging & Monitoring
log_min_duration_statement: 100 # log queries > 100ms
log_statement: ddl # log DDL statements
log_checkpoints: on
log_lock_waits: on
log_temp_files: 1024 # log temp files > 1MB
log_autovacuum_min_duration: 1s
track_io_timing: on
track_functions: all
track_activity_query_size: 8192
Client Timeouts
deadlock_timeout: 50ms
idle_in_transaction_session_timeout: 10min
10-minute idle transaction timeout prevents zombie transactions holding locks.
Extension Config
shared_preload_libraries: 'pg_stat_statements, auto_explain'
# auto_explain
auto_explain.log_min_duration: 1s
auto_explain.log_analyze: on
auto_explain.log_verbose: on
auto_explain.log_timing: on
auto_explain.log_nested_statements: true
# pg_stat_statements
pg_stat_statements.max: 10000
pg_stat_statements.track: all
pg_stat_statements.track_utility: off
pg_stat_statements.track_planning: off
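With pg_stat_statements loaded, the slowest statements can be inspected directly from SQL; a typical query (column names assume PostgreSQL 13+):
-- top 10 statements by total execution time
SELECT query, calls, mean_exec_time, total_exec_time
  FROM pg_stat_statements
 ORDER BY total_exec_time DESC
 LIMIT 10;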
Template Comparison
| Feature | OLTP | OLAP | CRIT |
|---|---|---|---|
| max_connections | 500-1000 | 500 | 500-1000 |
| work_mem | 64MB-1GB | 64MB-8GB | 64MB-1GB |
| Parallel query | Moderate limit | Aggressive | Disabled |
| Vacuum intensity | Conservative | Aggressive | Conservative |
| Txn timeout | 10min | Disabled | 1min |
| Slow query threshold | 100ms | 1000ms | 100ms |
Why OLTP over OLAP?
- Queries are mostly simple point/range lookups
- Transaction response time requires milliseconds
- High concurrent connections
- No complex analytical queries
Why OLTP over CRIT?
- Small probability of data loss acceptable (async replication)
- Complete audit logs not required
- Better write performance desired
Connection Pooling
For high concurrency, use PgBouncer connection pool:
pg-oltp:
  vars:
    pg_default_service_dest: pgbouncer   # default
    pgbouncer_poolmode: transaction      # transaction-level pooling
Read Separation
Use read replicas to share read load:
pg-oltp:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
Monitoring Metrics
Focus on these metrics:
- Connections: Active/waiting connection counts
- Transaction rate: TPS, commit/rollback ratio
- Response time: Query latency percentiles (p50/p95/p99)
- Lock waits: Lock wait time, deadlock counts
- Replication lag: Replica delay time and bytes
References
3 - OLAP Template
PostgreSQL config template optimized for online analytical processing workloads
olap.yml is optimized for online analytical processing (OLAP). Designed for 4-128 core CPUs with support for large queries, high parallelism, relaxed timeouts, and aggressive vacuum.
Pair with node_tune = olap for OS-level tuning.
Use Cases
OLAP template is ideal for:
- Data warehouses: Historical data storage, multidimensional analysis
- BI reports: Complex report queries, dashboard data sources
- ETL processing: Data extraction, transformation, loading
- Data analysis: Ad-hoc queries, data exploration
- HTAP mixed workloads: Analytical replicas
Workload characteristics:
- Complex queries (seconds to minutes)
- Low concurrent connections (tens to hundreds)
- Read-intensive, writes typically batch operations
- Throughput-sensitive, tolerates higher latency
- Scans large data volumes
Usage
Specify pg_conf = olap.yml in cluster definition:
pg-olap:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-olap
    pg_conf: olap.yml              # PostgreSQL analytics template
    node_tune: olap                # OS analytics tuning
Use olap.yml template for dedicated offline replicas:
pg-mixed:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: offline, pg_conf: olap.yml }   # offline analytics replica
  vars:
    pg_cluster: pg-mixed
    pg_conf: oltp.yml              # primary and online replicas use OLTP
    node_tune: oltp                # OS OLTP tuning
Parameter Details
Connection Management
max_connections: 500
superuser_reserved_connections: 10
OLAP scenarios typically don’t need many connections; 500 is sufficient for most analytical workloads.
Memory Config
OLAP template uses more aggressive memory allocation:
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 50% | Faster index creation and VACUUM |
| work_mem | 64MB - 8GB | Larger sort/hash memory |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
work_mem calculation (differs from OLTP):
work_mem = min(max(shared_buffers / max_connections, 64MB), 8GB)
Larger work_mem allows bigger sort and hash operations in memory, avoiding disk spill.
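Whether a given query still spills can be checked with EXPLAIN ANALYZE, which reports the sort method; a sketch (table and column names are hypothetical):
SET work_mem = '2GB';                              -- session-level override for testing
EXPLAIN (ANALYZE, BUFFERS)
SELECT region, sum(amount) FROM sales GROUP BY region ORDER BY region;
-- "Sort Method: quicksort" means the sort stayed in memory; "external merge" means it spilled to disk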
Locks & Transactions
max_locks_per_transaction: 2-4x maxconn # OLTP: 1-2x
OLAP queries may involve more tables (partitions, many JOINs), requiring more lock slots.
Parallel Query
OLAP template aggressively enables parallel queries:
max_worker_processes: cpu + 12 (min 20) # OLTP: cpu + 8
max_parallel_workers: 80% × cpu (min 2) # OLTP: 50%
max_parallel_workers_per_gather: 50% × cpu # OLTP: 20% (max 8)
max_parallel_maintenance_workers: 33% × cpu
Parallel cost estimates use defaults to favor parallel plans:
# parallel_setup_cost: 1000 # default, not doubled
# parallel_tuple_cost: 0.1 # default, not doubled
Partition-wise optimization enabled:
enable_partitionwise_join: on # smart partition JOIN
enable_partitionwise_aggregate: on # smart partition aggregation
IO Config (PG17+)
io_workers: 50% × cpu (4-32) # OLTP: 25% (4-16)
More IO workers support parallel large table scans.
WAL Config
min_wal_size: disk/20 (max 200GB)
max_wal_size: disk/5 (max 2000GB)
max_slot_wal_keep_size: disk×3/10 (max 3000GB)
temp_file_limit: disk/5 (max 2000GB) # OLTP: disk/20
Larger temp_file_limit allows bigger intermediate results to spill to disk.
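Cumulative temp file usage can be tracked from pg_stat_database, for example:
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
  FROM pg_stat_database
 WHERE datname = current_database();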
Vacuum Config
OLAP template uses aggressive vacuum settings:
vacuum_cost_delay: 10ms # OLTP: 20ms, faster vacuum
vacuum_cost_limit: 10000 # OLTP: 2000, more work per round
autovacuum_max_workers: 3
autovacuum_naptime: 1min
autovacuum_vacuum_scale_factor: 0.08
autovacuum_analyze_scale_factor: 0.04
Analytical databases often have bulk writes requiring aggressive vacuum to reclaim space.
Query Optimization
random_page_cost: 1.1
effective_io_concurrency: 200
default_statistics_target: 1000 # OLTP: 400, more precise stats
Higher default_statistics_target provides more accurate query plans, crucial for complex analytics.
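Statistics precision can also be raised per column instead of globally; for example (table and column are hypothetical, and ANALYZE must be re-run afterwards):
ALTER TABLE sales ALTER COLUMN region SET STATISTICS 2000;
ANALYZE sales;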
Logging & Monitoring
log_min_duration_statement: 1000 # OLTP: 100ms, relaxed threshold
log_statement: ddl
log_checkpoints: on
log_lock_waits: on
log_temp_files: 1024
log_autovacuum_min_duration: 1s
track_io_timing: on
track_cost_delay_timing: on # PG18+, track vacuum cost delay
track_functions: all
track_activity_query_size: 8192
Client Timeouts
deadlock_timeout: 50ms
idle_in_transaction_session_timeout: 0 # OLTP: 10min, disabled
Analytical queries may need to hold transactions for extended periods, so idle timeout is disabled.
Key Differences from OLTP
| Parameter | OLAP | OLTP | Reason |
|---|---|---|---|
| max_connections | 500 | 500-1000 | Fewer analytical connections |
| work_mem limit | 8GB | 1GB | Support larger in-memory sorts |
| maintenance_work_mem | 50% buffer | 25% buffer | Faster index creation |
| max_locks_per_transaction | 2-4x | 1-2x | More tables in queries |
| max_parallel_workers | 80% cpu | 50% cpu | Aggressive parallelism |
| max_parallel_workers_per_gather | 50% cpu | 20% cpu | Aggressive parallelism |
| parallel_setup_cost | 1000 | 2000 | Default, encourages parallel |
| parallel_tuple_cost | 0.1 | 0.2 | Default, encourages parallel |
| enable_partitionwise_join | on | off | Partition optimization |
| enable_partitionwise_aggregate | on | off | Partition optimization |
| vacuum_cost_delay | 10ms | 20ms | Aggressive vacuum |
| vacuum_cost_limit | 10000 | 2000 | Aggressive vacuum |
| temp_file_limit | 1/5 disk | 1/20 disk | Allow larger temp files |
| io_workers | 50% cpu | 25% cpu | More parallel IO |
| log_min_duration_statement | 1000ms | 100ms | Relaxed slow query threshold |
| default_statistics_target | 1000 | 400 | More precise stats |
| idle_in_transaction_session_timeout | Disabled | 10min | Allow long transactions |
With TimescaleDB
OLAP template works great with TimescaleDB:
pg-timeseries:
  vars:
    pg_conf: olap.yml
    pg_libs: 'timescaledb, pg_stat_statements, auto_explain'
    pg_extensions:
      - timescaledb
With pg_duckdb
For ultimate analytical performance, combine with pg_duckdb:
pg-analytics:
  vars:
    pg_conf: olap.yml
    pg_libs: 'pg_duckdb, pg_stat_statements, auto_explain'
Columnar Storage
Consider columnar storage extensions:
pg_extensions:
  - citus_columnar                 # or pg_mooncake
Resource Isolation
For mixed workloads, isolate analytics to dedicated replicas:
pg-mixed:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }   # OLTP writes
    10.10.10.12: { pg_seq: 2, pg_role: replica }   # OLTP reads
    10.10.10.13: { pg_seq: 3, pg_role: offline }   # OLAP analytics
  vars:
    pg_cluster: pg-mixed
Monitoring Metrics
Focus on these metrics:
- Query time: Long query execution time distribution
- Parallelism: Parallel worker utilization
- Temp files: Temp file size and count
- Disk IO: Sequential and index scan IO volume
- Cache hit ratio: shared_buffers and OS cache hit rates
References
4 - CRIT Template
PostgreSQL config template optimized for critical/financial workloads with data safety and audit compliance
crit.yml is optimized for critical/financial workloads. Designed for 4-128 core CPUs with forced sync replication, data checksums, full audit logging, and strict security. Trades performance for maximum data safety.
Pair with node_tune = crit for OS-level tuning, optimizing dirty page management.
Use Cases
CRIT template is ideal for:
- Financial transactions: Bank transfers, payment settlement, securities trading
- Core accounting: General ledger systems, accounting systems
- Compliance audit: Businesses requiring complete operation records
- Critical business: Any scenario that cannot tolerate data loss
Requirements:
- Zero data loss (RPO = 0)
- Data integrity verification
- Complete audit logs
- Strict security policies
- Acceptable performance trade-offs
Usage
Specify pg_conf = crit.yml in cluster definition:
pg-finance:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-finance
    pg_conf: crit.yml              # PostgreSQL critical template
    node_tune: crit                # OS critical tuning
Recommendation: Critical clusters should have at least 3 nodes to maintain sync replication when one node fails.
Core Features
Forced Sync Replication
CRIT template forces sync replication regardless of pg_rpo setting:
synchronous_mode: true # forced on, ignores pg_rpo
Every transaction commit waits for at least one replica confirmation, ensuring RPO = 0 (zero data loss).
Cost: Write latency increases (typically 1-5ms depending on network).
Forced Data Checksums
CRIT template forces data checksums regardless of pg_checksum setting:
initdb:
  - data-checksums                 # forced on, ignores pg_checksum
Data checksums detect silent disk corruption (bit rot), critical for financial data.
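Checksums can later be verified offline with the pg_checksums utility shipped with PostgreSQL 12+; for example (the data directory path is an assumption, adjust it to your pg_data setting, and the instance must be cleanly shut down first):
pg_checksums --check -D /pg/data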
Disabled Parallel Query
CRIT template disables parallel query gather operations:
max_parallel_workers_per_gather: 0 # parallel queries disabled
Parallel cost estimates are also increased:
parallel_setup_cost: 2000
parallel_tuple_cost: 0.2
min_parallel_table_scan_size: 16MB
min_parallel_index_scan_size: 1024
Reason: Parallel queries may cause unstable latency. For latency-sensitive financial transactions, predictable stable performance is more important.
Parameter Details
Connection Management
max_connections: 500/1000 # depends on pgbouncer usage
superuser_reserved_connections: 10
Same as OLTP template.
Memory Config
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 25% | For VACUUM, CREATE INDEX |
| work_mem | 64MB - 1GB | Same as OLTP |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
WAL Config (Key Differences)
wal_writer_delay: 10ms # OLTP: 20ms, more frequent flush
wal_writer_flush_after: 0 # OLTP: 1MB, immediate flush, no buffer
idle_replication_slot_timeout: 3d # OLTP: 7d, stricter slot cleanup
wal_writer_flush_after: 0 ensures every WAL write flushes to disk immediately, minimizing data loss risk.
Replication Config (PG15-)
vacuum_defer_cleanup_age: 500000 # PG15 and below only
Preserves 500K recent transactions from vacuum cleanup, providing more catchup buffer for replicas.
Audit Logging (Key Differences)
CRIT template enables full connection audit:
PostgreSQL 18+:
log_connections: 'receipt,authentication,authorization'
PostgreSQL 17 and below:
log_connections: 'on'
log_disconnections: 'on'
Records complete connection lifecycle:
- Connection receipt
- Authentication process
- Authorization result
- Disconnection
Query Logging
log_min_duration_statement: 100 # log queries > 100ms
log_statement: ddl # log all DDL
track_activity_query_size: 32768 # OLTP: 8192, capture full queries
32KB track_activity_query_size ensures capturing complete long query text.
Statistics Tracking
track_io_timing: on
track_cost_delay_timing: on # PG18+, track vacuum cost delay
track_functions: all
track_activity_query_size: 32768
Client Timeouts (Key Differences)
idle_in_transaction_session_timeout: 1min # OLTP: 10min, stricter
1-minute idle transaction timeout quickly releases zombie transactions holding locks.
Extension Config
shared_preload_libraries: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
Note: CRIT template loads passwordcheck by default, enforcing password complexity.
Key Differences from OLTP
| Parameter | CRIT | OLTP | Reason |
|---|---|---|---|
| synchronous_mode | Forced true | Depends on pg_rpo | Zero data loss |
| data-checksums | Forced on | Optional | Data integrity |
| max_parallel_workers_per_gather | 0 | 20% cpu | Stable latency |
| wal_writer_delay | 10ms | 20ms | More frequent flush |
| wal_writer_flush_after | 0 | 1MB | Immediate flush |
| idle_replication_slot_timeout | 3d | 7d | Stricter cleanup |
| idle_in_transaction_session_timeout | 1min | 10min | Quick lock release |
| track_activity_query_size | 32KB | 8KB | Complete query capture |
| log_connections | Full logging | Auth only | Audit compliance |
| log_disconnections | on | off | Audit compliance |
| passwordcheck | Enabled | Not enabled | Password security |
| vacuum_defer_cleanup_age | 500000 | 0 | Replica catchup buffer |
Using the CRIT template has the following impacts:
Increased Write Latency
Sync replication adds 1-5ms write latency (network-dependent):
Async replication: commit -> local flush -> return to client
Sync replication: commit -> local flush -> wait replica confirm -> return to client
Reduced Write Throughput
Due to replica confirmation wait, write TPS may drop 10-30%.
More Stable Query Latency
With parallel queries disabled, query latency is more predictable without parallel startup overhead variance.
Slightly Increased Resource Overhead
More frequent WAL flushes and complete audit logs add extra IO overhead.
HA Configuration
Minimum Recommended Setup
pg-critical:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-critical
    pg_conf: crit.yml              # PostgreSQL critical template
    node_tune: crit                # OS critical tuning
3-node setup ensures sync replication continues when one node fails.
Cross-DC Deployment
For financial-grade disaster recovery:
pg-critical:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary, pg_weight: 100 }   # DC A
    10.10.10.12: { pg_seq: 2, pg_role: replica, pg_weight: 100 }   # DC A
    10.20.10.13: { pg_seq: 3, pg_role: replica, pg_weight: 0 }     # DC B (standby)
  vars:
    pg_cluster: pg-critical
    pg_conf: crit.yml              # PostgreSQL critical template
    node_tune: crit                # OS critical tuning
Quorum Commit
For higher consistency, configure multiple sync replicas:
$ pg edit-config pg-critical
synchronous_mode: true
synchronous_node_count: 2 # require 2 replica confirmations
Security Hardening Tips
Password Policy
CRIT template has passwordcheck enabled; further configure:
-- Set password encryption
ALTER SYSTEM SET password_encryption = 'scram-sha-256';
Audit Extension
Consider pgaudit for detailed auditing:
pg_libs: 'pg_stat_statements, auto_explain, pgaudit'
pg_parameters:
  pgaudit.log: 'ddl, role, write'
Network Isolation
Ensure database network is isolated; use HBA rules to restrict access.
Monitoring Metrics
For critical clusters, focus on:
- Replication lag: Sync lag should be near zero (see the query sketch below)
- Transaction commit time: p99 latency
- Lock waits: Long lock waits may impact business
- Checkpoints: Checkpoint duration and frequency
- WAL generation rate: Predict disk space needs
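Replication state and lag can be checked directly on the primary, for example:
-- confirm sync replicas and their lag on the primary
SELECT application_name, state, sync_state,
       write_lag, flush_lag, replay_lag
  FROM pg_stat_replication;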
References
5 - TINY Template
PostgreSQL config template optimized for micro instances and resource-constrained environments
tiny.yml is optimized for micro instances and resource-constrained environments. Designed for 1-3 core CPUs with minimal resource usage, conservative memory allocation, and disabled parallel queries.
Pair with node_tune = tiny for OS-level tuning.
Use Cases
TINY template is ideal for:
- Dev/test: Local development, CI/CD testing
- Low-spec VMs: 1-2 core CPU, 1-4GB RAM cloud instances
- Edge computing: Raspberry Pi, embedded devices
- Demos: Quick Pigsty experience
- Personal projects: Resource-limited blogs, small apps
Resource constraints:
- 1-3 CPU cores
- 1-8 GB RAM
- Limited disk space
- May share resources with other services
Usage
Specify pg_conf = tiny.yml in cluster definition:
pg-dev:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-dev
    pg_conf: tiny.yml              # PostgreSQL micro instance template
    node_tune: tiny                # OS micro instance tuning
Single-node development:
pg-local:
  hosts:
    127.0.0.1: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-local
    pg_conf: tiny.yml              # PostgreSQL micro instance template
    node_tune: tiny                # OS micro instance tuning
Parameter Details
Connection Management
max_connections: 250 # OLTP: 500-1000, reduced connection overhead
superuser_reserved_connections: 10
Micro instances don’t need many concurrent connections; 250 is sufficient for dev/test.
Memory Config
TINY template uses conservative memory allocation:
| Parameter | Formula | Description |
|---|---|---|
| shared_buffers | mem × pg_shared_buffer_ratio | Default ratio 0.25 |
| maintenance_work_mem | shared_buffers × 25% | For VACUUM, CREATE INDEX |
| work_mem | 16MB - 256MB | Smaller sort/hash memory |
| effective_cache_size | total mem - shared_buffers | Estimated cache memory |
work_mem calculation (differs from OLTP):
work_mem = min(max(shared_buffers / max_connections, 16MB), 256MB)
Smaller work_mem limit (256MB vs OLTP’s 1GB) prevents memory exhaustion.
Parallel Query (Fully Disabled)
TINY template completely disables parallel queries:
max_worker_processes: cpu + 4 (min 12) # OLTP: cpu + 8
max_parallel_workers: 50% × cpu (min 1) # OLTP: 50% (min 2)
max_parallel_workers_per_gather: 0 # parallel queries disabled
max_parallel_maintenance_workers: 33% × cpu (min 1)
max_parallel_workers_per_gather: 0 ensures queries won’t spawn parallel workers, avoiding resource contention on low-core systems.
IO Config (PG17+)
io_workers: 3 # fixed value, OLTP: 25% cpu (4-16)
Fixed low IO worker count suitable for resource-constrained environments.
Vacuum Config
vacuum_cost_delay: 20ms
vacuum_cost_limit: 2000
autovacuum_max_workers: 2 # OLTP: 3, one fewer worker
autovacuum_naptime: 1min
# autovacuum_vacuum_scale_factor uses default
# autovacuum_analyze_scale_factor uses default
Fewer autovacuum workers reduce background resource usage.
Query Optimization
random_page_cost: 1.1
effective_io_concurrency: 200
default_statistics_target: 200 # OLTP: 400, lower precision saves space
Lower default_statistics_target reduces pg_statistic table size.
Logging Config
log_min_duration_statement: 100 # same as OLTP
log_statement: ddl
log_checkpoints: on
log_lock_waits: on
log_temp_files: 1024
# log_connections uses default (no extra logging)
TINY template doesn’t enable extra connection logging to reduce log volume.
Client Timeouts
deadlock_timeout: 50ms
idle_in_transaction_session_timeout: 10min # same as OLTP
Extension Config
shared_preload_libraries: 'pg_stat_statements, auto_explain'
pg_stat_statements.max: 2500 # OLTP: 10000, reduced memory usage
pg_stat_statements.track: all
pg_stat_statements.track_utility: off
pg_stat_statements.track_planning: off
pg_stat_statements.max reduced from 10000 to 2500, saving ~75% memory.
Key Differences from OLTP
| Parameter | TINY | OLTP | Reason |
|---|---|---|---|
| max_connections | 250 | 500-1000 | Reduce connection overhead |
| work_mem limit | 256MB | 1GB | Prevent memory exhaustion |
| max_worker_processes | cpu+4 | cpu+8 | Fewer background processes |
| max_parallel_workers_per_gather | 0 | 20% cpu | Disable parallel queries |
| autovacuum_max_workers | 2 | 3 | Reduce background load |
| default_statistics_target | 200 | 400 | Save space |
| pg_stat_statements.max | 2500 | 10000 | Reduce memory usage |
| io_workers | 3 | 25% cpu | Fixed low value |
Resource Estimates
TINY template resource usage by configuration:
1 Core 1GB RAM
shared_buffers: ~256MB
work_mem: ~16MB
maintenance_work_mem: ~64MB
max_connections: 250
max_worker_processes: ~12
PostgreSQL process memory: ~400-600MB
2 Core 4GB RAM
shared_buffers: ~1GB
work_mem: ~32MB
maintenance_work_mem: ~256MB
max_connections: 250
max_worker_processes: ~12
PostgreSQL process memory: ~1.5-2GB
4 Core 8GB RAM
Consider using OLTP template instead:
pg-small:
  vars:
    pg_conf: oltp.yml              # 4C8G can use OLTP template
Further Resource Reduction
For extremely constrained resources:
pg_parameters:
  max_connections: 100             # further reduce
  shared_buffers: 128MB            # further reduce
  maintenance_work_mem: 32MB
  work_mem: 8MB
Disable Unnecessary Extensions
pg_libs: 'pg_stat_statements' # keep only essential extensions
Disable Unnecessary Features
pg_parameters:
  track_io_timing: off             # disable IO timing tracking
  track_functions: none            # disable function tracking
Use External Connection Pool
Even on micro instances, PgBouncer significantly improves concurrency:
pg-tiny:
  vars:
    pg_conf: tiny.yml
    pg_default_service_dest: pgbouncer
    pgbouncer_poolmode: transaction
AWS
- t3.micro: 1 vCPU, 1GB RAM - suitable for TINY
- t3.small: 2 vCPU, 2GB RAM - suitable for TINY
- t3.medium: 2 vCPU, 4GB RAM - consider OLTP
Alibaba Cloud
- ecs.t6-c1m1.small: 1 vCPU, 1GB RAM - suitable for TINY
- ecs.t6-c1m2.small: 1 vCPU, 2GB RAM - suitable for TINY
- ecs.t6-c1m4.small: 1 vCPU, 4GB RAM - suitable for TINY
Tencent Cloud
- SA2.SMALL1: 1 vCPU, 1GB RAM - suitable for TINY
- SA2.SMALL2: 1 vCPU, 2GB RAM - suitable for TINY
- SA2.SMALL4: 1 vCPU, 4GB RAM - suitable for TINY
Edge Device Deployment
Raspberry Pi 4
pg-pi:
  hosts:
    192.168.1.100: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-pi
    pg_conf: tiny.yml              # PostgreSQL micro instance template
    node_tune: tiny                # OS micro instance tuning
    pg_storage_type: SSD           # SSD storage recommended
Docker Container
pg-docker:
  hosts:
    172.17.0.2: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-docker
    pg_conf: tiny.yml              # PostgreSQL micro instance template
    node_tune: tiny                # OS micro instance tuning
Upgrading to OLTP
When your application grows and needs more resources, easily upgrade to OLTP template:
- Upgrade VM specs (4 core 8GB+)
- Modify cluster config:
pg-growing:
  vars:
    pg_conf: oltp.yml              # change from tiny.yml to oltp.yml
    node_tune: oltp                # change from tiny to oltp
- Reconfigure the cluster or redeploy, as sketched below
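After changing pg_conf and node_tune, the new settings still have to be applied to the running cluster. One way is to re-run the relevant playbooks; the playbook and tag names below follow common Pigsty conventions but may differ between versions, so treat this as a sketch:
./node.yml  -l pg-growing -t node_tune     # re-apply the OS tuning profile
./pgsql.yml -l pg-growing -t pg_config     # regenerate and apply the PostgreSQL configuration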
References