Configuration
Choose the appropriate instance and cluster types based on your requirements to configure PostgreSQL database clusters that meet your needs.
Pigsty is a “configuration-driven” PostgreSQL platform: all behaviors come from the combination of inventory files in ~/pigsty/conf/*.yml and PGSQL parameters.
Once you’ve written the configuration, you can replicate a customized cluster with instances, users, databases, access control, extensions, and tuning policies in just a few minutes.
Configuration Entry
- Prepare Inventory: Copy a pigsty/conf/*.yml template or write an Ansible Inventory from scratch, placing cluster groups (all.children.<cls>.hosts) and global variables (all.vars) in the same file.
- Define Parameters: Override the required PGSQL parameters in the vars block. The override order from global → cluster → host determines the final value.
- Apply Configuration: Run ./configure -c <conf>, bin/pgsql-add <cls>, and other playbooks to apply the configuration. Pigsty will generate the configuration files needed for Patroni/pgbouncer/pgbackrest based on the parameters.
Pigsty’s default demo inventory conf/pgsql.yml is a minimal example: one pg-meta cluster, global pg_version: 18, and a few business user and database definitions. You can expand with more clusters from this base.
Focus Areas & Documentation Index
Pigsty’s PostgreSQL configuration can be organized from the following dimensions. Subsequent documentation will explain “how to configure” each:
- Cluster & Instances: Define instance topology (standalone, primary-replica, standby cluster, delayed cluster, Citus, etc.) through pg_cluster / pg_role / pg_seq / pg_upstream.
- Kernel Version: Select the kernel version, flavor, and tuning templates using pg_version, pg_mode, pg_packages, pg_extensions, pg_conf, and other parameters.
- Users/Roles: Declare system roles, business accounts, password policies, and connection pool attributes in pg_default_roles and pg_users.
- Database Objects: Create databases as needed using the pg_databases, baseline, schemas, extensions, and pool_* fields, and automatically integrate with pgbouncer/Grafana.
- Access Control (HBA): Maintain host-based authentication policies using pg_default_hba_rules and pg_hba_rules to ensure access boundaries for different roles/networks.
- Privilege Model (ACL): Converge object privileges through the pg_default_privileges, pg_default_roles, and pg_revoke_public parameters, providing an out-of-the-box layered role system.
After understanding these parameters, you can write declarative inventory manifests as “configuration as infrastructure” for any business requirement. Pigsty will handle execution and ensure idempotency.
A Typical Example
The following snippet shows how to control instance topology, kernel version, extensions, users, and databases in the same configuration file:
all:
children:
pg-analytics:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica, pg_offline_query: true }
vars:
pg_cluster: pg-analytics
pg_conf: olap.yml
pg_extensions: [ postgis, timescaledb, pgvector ]
pg_databases:
- { name: bi, owner: dbuser_bi, schemas: [mart], extensions: [timescaledb], pool_mode: session }
pg_users:
- { name: dbuser_bi, password: DBUser.BI, roles: [dbrole_admin], pgbouncer: true }
vars:
pg_version: 18
pg_packages: [ pgsql-main pgsql-common ]
pg_hba_rules:
- { user: dbuser_bi, db: bi, addr: intra, auth: ssl, title: 'BI only allows intranet SSL access' }
- The pg-analytics cluster contains one primary and one replica that also serves offline queries.
- Global settings specify pg_version: 18 with a set of example extensions and load the olap.yml tuning template.
- Business objects are declared in pg_databases and pg_users, automatically generating schema/extension and connection pool entries.
- Additional pg_hba_rules restrict access sources and authentication methods.
Modify and apply this inventory to get a customized PostgreSQL cluster without manual configuration.
1 - Cluster / Instance
Choose the appropriate instance and cluster types based on your requirements to configure PostgreSQL database clusters that meet your needs.
You can define different types of instances and clusters. Here are several common PostgreSQL instance/cluster types in Pigsty:
- Primary: Define a single-instance cluster.
- Replica: Define a basic HA cluster with one primary and one replica.
- Offline: Define an instance dedicated to OLAP/ETL/interactive queries.
- Sync Standby: Enable synchronous commit to ensure no data loss.
- Quorum Commit: Use quorum sync commit for a higher consistency level.
- Standby Cluster: Clone an existing cluster and follow it.
- Delayed Cluster: Clone an existing cluster for emergency data recovery.
- Citus Cluster: Define a Citus distributed database cluster.
Primary
We start with the simplest case: a single instance cluster consisting of one primary:
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
vars:
pg_cluster: pg-test
This configuration is concise and self-describing, consisting only of identity parameters. Note that the Ansible Group name should match pg_cluster.
Use the following command to create this cluster:
bin/pgsql-add pg-test
For demos, development and testing, temporary hosting needs, or non-critical analytical tasks, a single database instance may well be sufficient. However, such a single-node cluster has no high availability: when hardware fails, you will need PITR or other recovery means to meet the cluster’s RTO/RPO targets. For this reason, consider adding several read-only replicas to the cluster.
Replica
To add a read-only replica instance, you can add a new node to pg-test and set its pg_role to replica.
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica } # <--- newly added replica
vars:
pg_cluster: pg-test
If the entire cluster doesn’t exist, you can directly create the complete cluster. If the cluster primary has already been initialized, you can add a replica to the existing cluster:
bin/pgsql-add pg-test # initialize the entire cluster at once
bin/pgsql-add pg-test 10.10.10.12 # add replica to existing cluster
When the cluster primary fails, the read-only instance (Replica) can take over the primary’s work with the help of the high availability system. Additionally, read-only instances can be used to execute read-only queries: many businesses have far more read requests than write requests, and most read-only query loads can be handled by replica instances.
Offline
Offline instances are dedicated read-only replicas specifically for serving slow queries, ETL, OLAP traffic, and interactive queries. Slow queries/long transactions have adverse effects on the performance and stability of online business, so it’s best to isolate them from online business.
To add an offline instance, add a new node to the cluster and set its pg_role to offline.
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
10.10.10.13: { pg_seq: 3, pg_role: offline } # <--- newly added offline replica
vars:
pg_cluster: pg-test
Dedicated offline instances work similarly to common replica instances, but they serve as backup servers in the pg-test-replica service. That is, only when all replica instances are down will the offline and primary instances provide this read-only service.
In many cases, database resources are limited, and using a separate server as an offline instance is not economical. As a compromise, you can select an existing replica instance and mark it with the pg_offline_query flag to indicate it can handle “offline queries”. In this case, this read-only replica will handle both online read-only requests and offline queries. You can use pg_default_hba_rules and pg_hba_rules for additional access control on offline instances.
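For example, a minimal sketch of this compromise, reusing the pg-test cluster from above and marking an existing replica to also serve offline traffic:

```yaml
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica, pg_offline_query: true } # serves online reads and offline queries
  vars: { pg_cluster: pg-test }
```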
Sync Standby
When Sync Standby is enabled, PostgreSQL will select one replica as the sync standby, with all other replicas as candidates. The primary database will wait for the standby instance to flush to disk before confirming commits. The standby instance always has the latest data with no replication lag, and primary-standby switchover to the sync standby will have no data loss.
PostgreSQL uses asynchronous streaming replication by default, which may have small replication lag (on the order of 10KB/10ms). When the primary fails, there may be a small data loss window (which can be controlled using pg_rpo), but this is acceptable for most scenarios.
However, in some critical scenarios (e.g., financial transactions), data loss is completely unacceptable, or read replication lag is unacceptable. In such cases, you can use synchronous commit to solve this problem. To enable sync standby mode, you can simply use the crit.yml template in pg_conf.
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica }
10.10.10.13: { pg_seq: 3, pg_role: replica }
vars:
pg_cluster: pg-test
pg_conf: crit.yml # <--- use crit template
To enable sync standby on an existing cluster, configure the cluster and enable synchronous_mode:
$ pg edit-config pg-test # run as admin user on admin node
+++
-synchronous_mode: false # <--- old value
+synchronous_mode: true # <--- new value
synchronous_mode_strict: false
Apply these changes? [y/N]: y
In this case, the PostgreSQL configuration parameter synchronous_standby_names is automatically managed by Patroni.
One replica will be elected as the sync standby, and its application_name will be written to the PostgreSQL primary configuration file and applied.
Quorum Commit
Quorum Commit provides more powerful control than sync standby: especially when you have multiple replicas, you can set criteria for successful commits, achieving higher/lower consistency levels (and trade-offs with availability).
If you want at least two replicas to confirm commits, you can adjust the synchronous_node_count parameter through Patroni cluster configuration and apply it:
synchronous_mode: true # ensure synchronous commit is enabled
synchronous_node_count: 2 # specify "at least" how many replicas must successfully commit
If you want to use more sync replicas, modify the synchronous_node_count value. When the cluster size changes, you should ensure this configuration is still valid to avoid service unavailability.
In this case, the PostgreSQL configuration parameter synchronous_standby_names is automatically managed by Patroni.
synchronous_standby_names = '2 ("pg-test-3","pg-test-2")'
Example: Using multiple sync standbys
$ pg edit-config pg-test
---
+synchronous_node_count: 2
Apply these changes? [y/N]: y
After applying the configuration, two sync standbys appear.
+ Cluster: pg-test (7080814403632534854) +---------+----+-----------+-----------------+
| Member | Host | Role | State | TL | Lag in MB | Tags |
+-----------+-------------+--------------+---------+----+-----------+-----------------+
| pg-test-1 | 10.10.10.10 | Leader | running | 1 | | clonefrom: true |
| pg-test-2 | 10.10.10.11 | Sync Standby | running | 1 | 0 | clonefrom: true |
| pg-test-3 | 10.10.10.12 | Sync Standby | running | 1 | 0 | clonefrom: true |
+-----------+-------------+--------------+---------+----+-----------+-----------------+
Another scenario is using any n replicas to confirm commits. In this case, the configuration is slightly different. For example, if we only need any one replica to confirm commits:
synchronous_mode: quorum # use quorum commit
postgresql:
parameters: # modify PostgreSQL's configuration parameter synchronous_standby_names, using `ANY n ()` syntax
synchronous_standby_names: 'ANY 1 (*)' # you can specify a specific replica list or use * to wildcard all replicas.
Example: Enable ANY quorum commit
$ pg edit-config pg-test
+ synchronous_standby_names: 'ANY 1 (*)' # in ANY mode, this parameter is needed
- synchronous_node_count: 2 # in ANY mode, this parameter is not needed
Apply these changes? [y/N]: y
After applying, the configuration takes effect, and all standbys become regular replicas in Patroni. However, in pg_stat_replication, you can see sync_state becomes quorum.
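You can confirm this from the primary with a quick query (this is the standard PostgreSQL statistics view, not Pigsty-specific):

```sql
-- on the primary: in ANY-quorum mode, sync_state shows 'quorum' for eligible replicas
SELECT application_name, state, sync_state FROM pg_stat_replication;
```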
Standby Cluster
You can clone an existing cluster and create a standby cluster for data migration, horizontal splitting, multi-region deployment, or disaster recovery.
Under normal circumstances, the standby cluster will follow the upstream cluster and keep content synchronized. You can promote the standby cluster to become a truly independent cluster.
The standby cluster definition is basically the same as a normal cluster definition, except that the pg_upstream parameter is additionally defined on the primary. The primary of the standby cluster is called the Standby Leader.
For example, below defines a pg-test cluster and its standby cluster pg-test2. The configuration inventory might look like this:
# pg-test is the original cluster
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
vars: { pg_cluster: pg-test }
# pg-test2 is the standby cluster of pg-test
pg-test2:
hosts:
10.10.10.12: { pg_seq: 1, pg_role: primary , pg_upstream: 10.10.10.11 } # <--- pg_upstream defined here
10.10.10.13: { pg_seq: 2, pg_role: replica }
vars: { pg_cluster: pg-test2 }
The primary node pg-test2-1 of the pg-test2 cluster will be a downstream replica of pg-test and serve as the Standby Leader in the pg-test2 cluster.
Just ensure the pg_upstream parameter is configured on the standby cluster’s primary node to automatically pull backups from the original upstream.
bin/pgsql-add pg-test # create original cluster
bin/pgsql-add pg-test2 # create standby cluster
Example: Change replication upstream
If necessary (e.g., upstream primary-standby switchover/failover), you can change the standby cluster’s replication upstream through cluster configuration.
To do this, simply change standby_cluster.host to the new upstream IP address and apply.
$ pg edit-config pg-test2
standby_cluster:
create_replica_methods:
- basebackup
- host: 10.10.10.13 # <--- old upstream
+ host: 10.10.10.12 # <--- new upstream
port: 5432
Apply these changes? [y/N]: y
Example: Promote standby cluster
You can promote the standby cluster to an independent cluster at any time, so the cluster can independently handle write requests and diverge from the original cluster.
To do this, you must configure the cluster and completely erase the standby_cluster section, then apply.
$ pg edit-config pg-test2
-standby_cluster:
- create_replica_methods:
- - basebackup
- host: 10.10.10.11
- port: 5432
Apply these changes? [y/N]: y
Example: Cascade replication
If you specify pg_upstream on a replica instead of the primary, you can configure cascade replication for the cluster.
When configuring cascade replication, you must use the IP address of an instance in the cluster as the parameter value, otherwise initialization will fail. The replica performs streaming replication from a specific instance rather than the primary.
The instance acting as a WAL relay is called a Bridge Instance. Using a bridge instance can share the burden of sending WAL from the primary. When you have dozens of replicas, using bridge instance cascade replication is a good idea.
pg-test:
hosts: # pg-test-1 ---> pg-test-2 ---> pg-test-3
10.10.10.11: { pg_seq: 1, pg_role: primary }
10.10.10.12: { pg_seq: 2, pg_role: replica } # <--- bridge instance
10.10.10.13: { pg_seq: 3, pg_role: replica, pg_upstream: 10.10.10.12 }
# ^--- replicate from pg-test-2 (bridge) instead of pg-test-1 (primary)
vars: { pg_cluster: pg-test }
Delayed Cluster
A Delayed Cluster is a special type of standby cluster used to quickly recover “accidentally deleted” data.
For example, if you want a cluster named pg-testdelay whose data content lags the pg-test cluster by one day:
# pg-test is the original cluster
pg-test:
hosts:
10.10.10.11: { pg_seq: 1, pg_role: primary }
vars: { pg_cluster: pg-test }
# pg-testdelay is the delayed cluster of pg-test
pg-testdelay:
hosts:
10.10.10.12: { pg_seq: 1, pg_role: primary , pg_upstream: 10.10.10.11, pg_delay: 1d }
10.10.10.13: { pg_seq: 2, pg_role: replica }
vars: { pg_cluster: pg-testdelay }
You can also configure a “replication delay” on an existing standby cluster.
$ pg edit-config pg-testdelay
standby_cluster:
create_replica_methods:
- basebackup
host: 10.10.10.11
port: 5432
+ recovery_min_apply_delay: 1h # <--- add delay duration here, e.g. 1 hour
Apply these changes? [y/N]: y
When some tuples and tables are accidentally deleted, you can modify this parameter to advance this delayed cluster to an appropriate point in time, read data from it, and quickly fix the original cluster.
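For instance, a sketch of shortening the delay so the delayed cluster replays forward, stopping just before the mistake (assuming the 1h delay configured above; the 15min value is illustrative):

```
$ pg edit-config pg-testdelay
-  recovery_min_apply_delay: 1h    # <--- old delay
+  recovery_min_apply_delay: 15min # <--- replay forward, stopping closer to the present
Apply these changes? [y/N]: y
```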
Delayed clusters require additional resources, but are much faster than PITR and have much less impact on the system. For very critical clusters, consider setting up delayed clusters.
Citus Cluster
Pigsty natively supports Citus. You can refer to files/pigsty/citus.yml and prod.yml as examples.
To define a Citus cluster, you need to specify the following parameters:
- pg_mode must be set to citus instead of the default pgsql.
- The shard name pg_shard and shard number pg_group must be defined on each shard cluster.
- pg_primary_db must be defined to specify the database managed by Patroni.
- If you want to use pg_dbsu postgres instead of the default pg_admin_username to execute admin commands, pg_dbsu_password must be set to a non-empty plaintext password.
Additionally, extra hba rules are needed to allow SSL access from localhost and other data nodes. As shown below:
all:
children:
pg-citus0: # citus shard 0
hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
vars: { pg_cluster: pg-citus0 , pg_group: 0 }
pg-citus1: # citus shard 1
hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
vars: { pg_cluster: pg-citus1 , pg_group: 1 }
pg-citus2: # citus shard 2
hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
vars: { pg_cluster: pg-citus2 , pg_group: 2 }
pg-citus3: # citus shard 3
hosts:
10.10.10.13: { pg_seq: 1, pg_role: primary }
10.10.10.14: { pg_seq: 2, pg_role: replica }
vars: { pg_cluster: pg-citus3 , pg_group: 3 }
vars: # global parameters for all Citus clusters
pg_mode: citus # pgsql cluster mode must be set to: citus
pg_shard: pg-citus # citus horizontal shard name: pg-citus
pg_primary_db: meta # citus database name: meta
pg_dbsu_password: DBUser.Postgres # if using dbsu, need to configure a password for it
pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] } ]
pg_databases: [ { name: meta ,extensions: [ { name: citus }, { name: postgis }, { name: timescaledb } ] } ]
pg_hba_rules:
- { user: 'all' ,db: all ,addr: 127.0.0.1/32 ,auth: ssl ,title: 'all user ssl access from localhost' }
- { user: 'all' ,db: all ,addr: intra ,auth: ssl ,title: 'all user ssl access from intranet' }
On the coordinator node, you can create distributed tables and reference tables and query them from any data node. Starting from Citus 11.2, any Citus database node can act as a coordinator.
SELECT create_distributed_table('pgbench_accounts', 'aid');
SELECT truncate_local_data_after_distributing_table($$public.pgbench_accounts$$);
SELECT create_reference_table('pgbench_branches');
SELECT truncate_local_data_after_distributing_table($$public.pgbench_branches$$);
SELECT create_reference_table('pgbench_history');
SELECT truncate_local_data_after_distributing_table($$public.pgbench_history$$);
SELECT create_reference_table('pgbench_tellers');
SELECT truncate_local_data_after_distributing_table($$public.pgbench_tellers$$);
2 - Kernel Version
How to choose the appropriate PostgreSQL kernel and major version.
Choosing a “kernel” in Pigsty means determining the PostgreSQL major version, mode/distribution, packages to install, and tuning templates to load.
Pigsty v4.1 currently supports PostgreSQL 13-18 and uses 18 by default. The following content shows how to make these choices through configuration files.
Major Version and Packages
- pg_version: Specify the PostgreSQL major version (default 18). Pigsty will automatically map to the correct package name prefix based on the version.
- pg_packages: Define the core package set to install; supports package aliases (default pgsql-main pgsql-common, which includes the kernel plus patroni/pgbouncer/pgbackrest and other common tools).
- pg_extensions: List of additional extension packages to install, also supports aliases; defaults to empty, meaning only core dependencies are installed.
all:
vars:
pg_version: 18
pg_packages: [ pgsql-main pgsql-common ]
pg_extensions: [ postgis, timescaledb, pgvector, pgml ]
Effect: Ansible will pull packages corresponding to pg_version=18 during installation, pre-install extensions to the system, and database initialization scripts can then directly CREATE EXTENSION.
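For instance, once the packages are on the system, enabling an extension in a database is a one-liner (note that the pgvector package provides the extension named vector):

```sql
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS vector;  -- the pgvector package's extension name is "vector"
```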
Extension support varies across versions in Pigsty’s offline repository: 13 has relatively fewer available extensions, while 17/18 have the broadest coverage. If an extension is not pre-packaged, it can be added via repo_packages_extra.
Kernel Mode (pg_mode)
pg_mode controls the kernel “flavor” to deploy. Default pgsql indicates standard PostgreSQL. Pigsty currently supports the following modes:
| Mode | Scenario |
|------|----------|
| pgsql | Standard PostgreSQL, HA + replication |
| citus | Citus distributed cluster, requires additional pg_shard / pg_group |
| gpsql | Greenplum / MatrixDB |
| mssql | Babelfish for PostgreSQL |
| mysql | OpenGauss/HaloDB compatible with MySQL protocol |
| polar | Alibaba PolarDB (based on pg polar distribution) |
| ivory | IvorySQL (Oracle-compatible syntax) |
| oriole | OrioleDB storage engine |
| oracle | PostgreSQL + ora compatibility (pg_mode: oracle) |
After selecting a mode, Pigsty will automatically load corresponding templates, dependency packages, and Patroni configurations. For example, deploying Citus:
all:
children:
pg-citus0:
hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
vars: { pg_cluster: pg-citus0, pg_group: 0 }
pg-citus1:
hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
vars: { pg_cluster: pg-citus1, pg_group: 1 }
vars:
pg_mode: citus
pg_shard: pg-citus
pg_primary_db: meta
Effect: All members will install Citus-related packages, Patroni writes to etcd in shard mode, and automatically CREATE EXTENSION citus in the meta database.
Extensions and Pre-installed Objects
Besides system packages, you can control components automatically loaded after database startup through the following parameters:
- pg_libs: List to write to shared_preload_libraries. For example: pg_libs: 'timescaledb, pg_stat_statements, auto_explain'.
- pg_default_extensions / pg_default_schemas: Control the schemas and extensions pre-created in template1 and postgres by initialization scripts.
- pg_parameters: Append ALTER SYSTEM SET for all instances (written to postgresql.auto.conf).
Example: Enable TimescaleDB, pgvector and customize some system parameters.
pg-analytics:
vars:
pg_cluster: pg-analytics
pg_libs: 'timescaledb, pg_stat_statements, pgml'
pg_default_extensions:
- { name: timescaledb }
- { name: pgvector }
pg_parameters:
timescaledb.max_background_workers: 8
shared_preload_libraries: "'timescaledb,pg_stat_statements,pgml'"
Effect: During initialization, template1 creates extensions, Patroni’s postgresql.conf injects corresponding parameters, and all business databases inherit these settings.
Tuning Template (pg_conf)
pg_conf points to Patroni templates in roles/pgsql/templates/*.yml. Pigsty includes four built-in general templates:
| Template | Applicable Scenario |
|----------|---------------------|
| oltp.yml | Default template, for 4–128 core TP workloads |
| olap.yml | Optimized for analytical scenarios |
| crit.yml | Emphasizes synchronous commit / minimal latency, suitable for zero-loss scenarios like finance |
| tiny.yml | Lightweight machines / edge scenarios / resource-constrained environments |
You can directly replace the template or customize a YAML file in templates/, then specify it in cluster vars.
pg-ledger:
hosts: { 10.10.10.21: { pg_seq: 1, pg_role: primary } }
vars:
pg_cluster: pg-ledger
pg_conf: crit.yml
pg_parameters:
synchronous_commit: 'remote_apply'
max_wal_senders: 16
wal_keep_size: '2GB'
Effect: Copy crit.yml as Patroni configuration, overlay pg_parameters written to postgresql.auto.conf, making instances run immediately in synchronous commit mode.
Combined Instance: A Complete Example
pg-rag:
hosts:
10.10.10.31: { pg_seq: 1, pg_role: primary }
10.10.10.32: { pg_seq: 2, pg_role: replica }
vars:
pg_cluster: pg-rag
pg_version: 18
pg_mode: pgsql
pg_conf: olap.yml
pg_packages: [ pgsql-main pgsql-common ]
pg_extensions: [ pgvector, pgml, postgis ]
pg_libs: 'pg_stat_statements, pgvector, pgml'
pg_parameters:
max_parallel_workers: 8
shared_buffers: '32GB'
- One primary + one replica, using olap.yml tuning.
- Installs PG18 + common RAG extensions, automatically loading pgvector/pgml at the system level.
- Patroni/pgbouncer/pgbackrest configurations are generated by Pigsty; no manual intervention needed.
Replace the above parameters according to business needs to complete all kernel-level customization.
3 - Package Alias
Pigsty provides a package alias translation mechanism that shields the differences in binary package details across operating systems, making installation easier.
PostgreSQL package naming conventions vary significantly across different operating systems:
- EL systems (RHEL/Rocky/Alma/…) use formats like pgvector_18, postgis36_18*
- Debian/Ubuntu systems use formats like postgresql-18-pgvector, postgresql-18-postgis-3
This difference adds cognitive burden to users: you need to remember different package name rules for different systems, and handle the embedding of PostgreSQL version numbers.
Package Alias
Pigsty solves this problem through the Package Alias mechanism: you only need to use unified aliases, and Pigsty will handle all the details:
# Using aliases - simple, unified, cross-platform
pg_extensions: [ postgis, pgvector, timescaledb ]
# Equivalent to actual package names on EL9 + PG18
pg_extensions: [ postgis36_18*, pgvector_18*, timescaledb-tsl_18* ]
# Equivalent to actual package names on Ubuntu 24 + PG18
pg_extensions: [ postgresql-18-postgis-3, postgresql-18-pgvector, postgresql-18-timescaledb-tsl ]
Alias Translation
Aliases can also group a set of packages as a whole. For example, Pigsty’s default installed packages - the default value of pg_packages is:
pg_packages: # pg packages to be installed, alias can be used
- pgsql-main pgsql-common
Pigsty will query the current operating system alias list (assuming el10.x86_64) and translate it to PGSQL kernel, extensions, and toolkits:
pgsql-main: "postgresql$v postgresql$v-server postgresql$v-libs postgresql$v-contrib postgresql$v-plperl postgresql$v-plpython3 postgresql$v-pltcl postgresql$v-llvmjit pg_repack_$v* wal2json_$v* pgvector_$v*"
pgsql-common: "patroni patroni-etcd pgbouncer pgbackrest pg_exporter pgbackrest_exporter vip-manager"
Next, Pigsty further translates pgsql-main using the currently specified PG major version (assuming pg_version = 18):
pg18-main: "postgresql18 postgresql18-server postgresql18-libs postgresql18-contrib postgresql18-plperl postgresql18-plpython3 postgresql18-pltcl postgresql18-llvmjit pg_repack_18* wal2json_18* pgvector_18*"
Through this approach, Pigsty shields the complexity of packages, allowing users to simply specify the functional components they want.
Which Variables Can Use Aliases?
You can use package aliases in four parameters, including pg_packages and pg_extensions; the aliases will be automatically converted to actual package names through the translation process described below.
Alias List
You can find the alias mapping files for each operating system and architecture in the roles/node_id/vars/ directory of the Pigsty project source code:
How It Works
Alias Translation Process
User config alias --> Detect OS --> Find alias mapping table --> Replace $v placeholder --> Install actual packages

postgis              el9.x86_64    postgis36_$v*              postgis36_18*
postgis              u24.x86_64    postgresql-$v-postgis-3    postgresql-18-postgis-3
Version Placeholder
Pigsty’s alias system uses $v as a placeholder for the PostgreSQL version number. When you specify a PostgreSQL version using pg_version, all $v in aliases will be replaced with the actual version number.
For example, when pg_version: 18:
| Alias Definition (EL) | Expanded Result |
|-----------------------|-----------------|
| postgresql$v* | postgresql18* |
| pgvector_$v* | pgvector_18* |
| timescaledb-tsl_$v* | timescaledb-tsl_18* |

| Alias Definition (Debian/Ubuntu) | Expanded Result |
|----------------------------------|-----------------|
| postgresql-$v | postgresql-18 |
| postgresql-$v-pgvector | postgresql-18-pgvector |
| postgresql-$v-timescaledb-tsl | postgresql-18-timescaledb-tsl |
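The expansion shown in the tables above is plain string substitution; a minimal shell sketch (illustrative only, not Pigsty's actual implementation):

```shell
# hypothetical illustration: replace the $v placeholder with the PG major version
alias_def='pgvector_$v*'   # alias entry from the EL mapping table
pg_version=18
echo "$alias_def" | sed "s/\$v/${pg_version}/g"   # prints: pgvector_18*
```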
Wildcard Matching
On EL systems, many aliases use the * wildcard to match related subpackages. For example:
- postgis36_18* will match postgis36_18, postgis36_18-client, postgis36_18-utils, etc.
- postgresql18* will match postgresql18, postgresql18-server, postgresql18-libs, postgresql18-contrib, etc.
This design ensures you don’t need to list each subpackage individually - one alias can install the complete extension.
4 - User/Role
How to define and customize PostgreSQL users and roles through configuration?
In this document, “user” refers to a logical object within a database cluster created with CREATE USER/ROLE.
In PostgreSQL, users belong directly to the database cluster rather than a specific database. Therefore, when creating business databases and users, follow the principle of “users first, databases later”.
Pigsty defines roles and users through two config parameters: pg_default_roles and pg_users.
The former defines roles/users shared across the entire environment; the latter defines business roles/users specific to a single cluster. Both have the same format as arrays of user definition objects.
Users/roles are created sequentially in array order, so later users can belong to roles defined earlier.
By default, all users marked with pgbouncer: true are added to the Pgbouncer connection pool user list.
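Because users are created in array order, a shared role can be declared first and referenced by later users; a hypothetical sketch (names are illustrative):

```yaml
pg_users:
  - { name: dbrole_analyst ,login: false ,comment: shared analyst role }        # created first
  - { name: dbuser_alice ,password: DBUser.Alice ,roles: [ dbrole_analyst ] }   # can reference it
```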
Define Users
Example from Pigsty demo pg-meta cluster:
pg-meta:
hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
vars:
pg_cluster: pg-meta
pg_users:
- {name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: pigsty admin user }
- {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
- {name: dbuser_grafana ,password: DBUser.Grafana ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for grafana database }
- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database }
- {name: dbuser_kong ,password: DBUser.Kong ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for kong api gateway }
- {name: dbuser_gitea ,password: DBUser.Gitea ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for gitea service }
- {name: dbuser_wiki ,password: DBUser.Wiki ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for wiki.js service }
- {name: dbuser_noco ,password: DBUser.Noco ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for nocodb service }
- {name: dbuser_remove ,state: absent } # use state: absent to delete user
Each user/role definition is a complex object. Only name is required:
- name: dbuser_meta # REQUIRED, `name` is the only mandatory field
state: create # Optional, user state: create (default), absent
password: DBUser.Meta # Optional, password, can be scram-sha-256 hash or plaintext
login: true # Optional, can login, default true
superuser: false # Optional, is superuser, default false
createdb: false # Optional, can create databases, default false
createrole: false # Optional, can create roles, default false
inherit: true # Optional, inherit role privileges, default true
replication: false # Optional, can replicate, default false
bypassrls: false # Optional, bypass row-level security, default false
connlimit: -1 # Optional, connection limit, default -1 (unlimited)
expire_in: 3650 # Optional, expire N days from now (priority over expire_at)
expire_at: '2030-12-31' # Optional, expiration date in YYYY-MM-DD format
comment: pigsty admin user # Optional, user comment
roles: [dbrole_admin] # Optional, roles array
parameters: # Optional, role-level config params
search_path: public
pgbouncer: true # Optional, add to connection pool user list, default false
pool_mode: transaction # Optional, pgbouncer pool mode, default transaction
pool_connlimit: -1 # Optional, user-level max pool connections, default -1
User-level pool quota is consistently defined by pool_connlimit (mapped to Pgbouncer max_user_connections).
Parameter Overview
The only required field is name - a valid, unique username within the cluster. All other params have sensible defaults.
| Field | Category | Type | Attr | Description |
|-------|----------|------|------|-------------|
| name | Basic | string | Required | Username, must be valid and unique |
| state | Basic | enum | Optional | State: create (default), absent |
| password | Basic | string | Mutable | User password, plaintext or hash |
| comment | Basic | string | Mutable | User comment |
| login | Privilege | bool | Mutable | Can login, default true |
| superuser | Privilege | bool | Mutable | Is superuser, default false |
| createdb | Privilege | bool | Mutable | Can create databases, default false |
| createrole | Privilege | bool | Mutable | Can create roles, default false |
| inherit | Privilege | bool | Mutable | Inherit role privileges, default true |
| replication | Privilege | bool | Mutable | Can replicate, default false |
| bypassrls | Privilege | bool | Mutable | Bypass RLS, default false |
| connlimit | Privilege | int | Mutable | Connection limit, -1 unlimited |
| expire_in | Validity | int | Mutable | Expire N days from now (priority) |
| expire_at | Validity | string | Mutable | Expiration date, YYYY-MM-DD format |
| roles | Role | array | Additive | Roles array, string or object format |
| parameters | Params | object | Mutable | Role-level parameters |
| pgbouncer | Pool | bool | Mutable | Add to connection pool, default false |
| pool_mode | Pool | enum | Mutable | Pool mode: transaction (default) |
| pool_connlimit | Pool | int | Mutable | Pool user max connections |
Parameter Details
name
String, required. Username - must be unique within the cluster.
Must be a valid PostgreSQL identifier matching ^[a-z_][a-z0-9_]{0,62}$: starts with lowercase letter or underscore, contains only lowercase letters, digits, underscores, max 63 chars.
- name: dbuser_app # Standard naming
- name: app_readonly # Underscore separated
- name: _internal # Underscore prefix (for internal roles)
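The naming rule above can be checked mechanically. A minimal sketch in Python (the regex is the one stated in this section; the function name is illustrative):

```python
import re

# ^[a-z_][a-z0-9_]{0,62}$ : lowercase letter or underscore first,
# then lowercase letters, digits, or underscores; 63 chars max
NAME_RE = re.compile(r'^[a-z_][a-z0-9_]{0,62}$')

def valid_username(name: str) -> bool:
    """Return True if name satisfies the username rule described above."""
    return NAME_RE.fullmatch(name) is not None

# valid_username('dbuser_app')  -> True
# valid_username('1bad')        -> False (must not start with a digit)
# valid_username('DBUser')      -> False (uppercase not allowed by this rule)
```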
state
Enum for user operation: create or absent. Default create.
| State | Description |
|-------|-------------|
| create | Default, create user, update if exists |
| absent | Delete user with DROP ROLE |
- name: dbuser_app # state defaults to create
- name: dbuser_old
state: absent # Delete user
System users (those defined by pg_dbsu, pg_replication_username, pg_monitor_username, and pg_admin_username) cannot be deleted via state: absent, to prevent cluster failure.
password
String, mutable. User password - users without password can’t login via password auth.
Password can be:
| Format | Example | Description |
|---|---|---|
| Plaintext | DBUser.Meta | Not recommended, logged to config |
| SCRAM-SHA-256 | SCRAM-SHA-256$4096:xxx$yyy:zzz | Recommended, PG10+ default |
| MD5 hash | md5... | Legacy compatibility |
# Plaintext (not recommended, logged to config)
- name: dbuser_app
password: MySecretPassword
# SCRAM-SHA-256 hash (recommended)
- name: dbuser_app
password: 'SCRAM-SHA-256$4096:xxx$yyy:zzz'
When setting password, Pigsty temporarily disables logging to prevent leakage:
SET log_statement TO 'none';
ALTER USER "dbuser_app" PASSWORD 'xxx';
SET log_statement TO DEFAULT;
To generate a SCRAM-SHA-256 hash, let PostgreSQL compute the verifier, then read it back from pg_authid:
# Set the password with scram-sha-256 encryption, then fetch the stored verifier
psql -c "SET password_encryption = 'scram-sha-256'; ALTER USER dbuser_app PASSWORD 'password';"
psql -c "SELECT rolpassword FROM pg_authid WHERE rolname = 'dbuser_app';"
comment
String, mutable. User comment, defaults to business user {name}.
Set via COMMENT ON ROLE, supports special chars (quotes auto-escaped).
- name: dbuser_app
comment: 'Main business application account'
COMMENT ON ROLE "dbuser_app" IS 'Main business application account';
login
Boolean, mutable. Can login, default true.
Setting false creates a Role rather than User - typically for permission grouping.
In PostgreSQL, CREATE USER equals CREATE ROLE ... LOGIN.
# Create login-able user
- name: dbuser_app
login: true
# Create role (no login, for permission grouping)
- name: dbrole_custom
login: false
comment: custom permission role
CREATE USER "dbuser_app" LOGIN;
CREATE ROLE "dbrole_custom" NOLOGIN;
superuser
Boolean, mutable. Is superuser, default false.
Superusers have full database privileges, bypassing all permission checks.
- name: dbuser_admin
superuser: true # Dangerous: full privileges
ALTER USER "dbuser_admin" SUPERUSER;
Pigsty provides default superuser via pg_admin_username (dbuser_dba). Don’t create additional superusers unless necessary.
createdb
Boolean, mutable. Can create databases, default false.
- name: dbuser_dev
createdb: true # Allow create database
ALTER USER "dbuser_dev" CREATEDB;
Some applications (Gitea, Odoo, etc.) may require CREATEDB privilege for their admin users.
createrole
Boolean, mutable. Can create other roles, default false.
Users with CREATEROLE can create, modify, delete other non-superuser roles.
- name: dbuser_admin
createrole: true # Allow manage other roles
ALTER USER "dbuser_admin" CREATEROLE;
inherit
Boolean, mutable. Auto-inherit privileges from member roles, default true.
Setting false requires explicit SET ROLE to use member role privileges.
# Auto-inherit role privileges (default)
- name: dbuser_app
inherit: true
roles: [dbrole_readwrite]
# Requires explicit SET ROLE
- name: dbuser_special
inherit: false
roles: [dbrole_admin]
ALTER USER "dbuser_special" NOINHERIT;
-- User must execute SET ROLE dbrole_admin to get privileges
replication
Boolean, mutable. Can initiate streaming replication, default false.
Usually only replication users (replicator) need this. Normal users shouldn’t have it unless for logical decoding subscriptions.
- name: replicator
replication: true # Allow streaming replication
roles: [pg_monitor, dbrole_readonly]
ALTER USER "replicator" REPLICATION;
bypassrls
Boolean, mutable. Bypass row-level security (RLS) policies, default false.
When enabled, user can access all rows even with RLS policies. Usually only for admins.
- name: dbuser_myappadmin
bypassrls: true # Bypass RLS policies
ALTER USER "dbuser_myappadmin" BYPASSRLS;
connlimit
Integer, mutable. Max concurrent connections, default -1 (unlimited).
Positive integer limits max simultaneous sessions for this user. Doesn’t affect superusers.
- name: dbuser_app
connlimit: 100 # Max 100 concurrent connections
- name: dbuser_batch
connlimit: 10 # Limit batch user connections
ALTER USER "dbuser_app" CONNECTION LIMIT 100;
expire_in
Integer, mutable. Expire N days from current date.
This param has higher priority than expire_at. Expiration recalculated on each playbook run - good for temp users needing periodic renewal.
- name: temp_user
expire_in: 30 # Expire in 30 days
- name: contractor_user
expire_in: 90 # Expire in 90 days
Generates SQL:
-- expire_in: 30, assuming current date is 2025-01-01
ALTER USER "temp_user" VALID UNTIL '2025-01-31';
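The date arithmetic behind expire_in can be sketched as follows (function name illustrative; the fixed "today" matches the comment in the SQL above):

```python
from datetime import date, timedelta

def valid_until(expire_in: int, today: date = date(2025, 1, 1)) -> str:
    """Compute the VALID UNTIL date: expire_in days from the current date.

    Because it is re-evaluated on every playbook run, the expiration keeps
    sliding forward - effectively a periodic-renewal mechanism.
    """
    return (today + timedelta(days=expire_in)).isoformat()

# valid_until(30) -> '2025-01-31', matching the generated SQL above
```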
expire_at
String, mutable. Expiration date in YYYY-MM-DD format, or special value infinity.
Lower priority than expire_in. Use infinity for never-expiring users.
- name: contractor_user
expire_at: '2024-12-31' # Expire on specific date
- name: permanent_user
expire_at: 'infinity' # Never expires
ALTER USER "contractor_user" VALID UNTIL '2024-12-31';
ALTER USER "permanent_user" VALID UNTIL 'infinity';
roles
Array, additive. Roles this user belongs to. Elements can be strings or objects.
Simple format - strings for role names:
- name: dbuser_app
roles:
- dbrole_readwrite
- pg_read_all_data
GRANT "dbrole_readwrite" TO "dbuser_app";
GRANT "pg_read_all_data" TO "dbuser_app";
Full format - objects for fine-grained control:
- name: dbuser_app
roles:
- dbrole_readwrite # Simple string: GRANT role
- { name: dbrole_admin, admin: true } # WITH ADMIN OPTION
- { name: pg_monitor, set: false } # PG16+: disallow SET ROLE
- { name: pg_signal_backend, inherit: false } # PG16+: don't auto-inherit
- { name: old_role, state: absent } # Revoke role membership
Object Format Parameters:
| Param | Type | Description |
|-------|------|-------------|
| name | string | Role name (required) |
| state | enum | grant (default) or absent/revoke: control membership |
| admin | bool | true: WITH ADMIN OPTION, false: REVOKE ADMIN |
| set | bool | PG16+: true: WITH SET TRUE, false: REVOKE SET |
| inherit | bool | PG16+: true: WITH INHERIT TRUE, false: REVOKE INHERIT |
PostgreSQL 16+ New Features:
PostgreSQL 16 introduced finer-grained role membership control:
- ADMIN OPTION: Allow granting role to other users
- SET OPTION: Allow using SET ROLE to switch to this role
- INHERIT OPTION: Auto-inherit this role’s privileges
# PostgreSQL 16+ complete example
- name: dbuser_app
roles:
# Normal membership
- dbrole_readwrite
# Can grant dbrole_admin to other users
- { name: dbrole_admin, admin: true }
# Cannot SET ROLE to pg_monitor (only inherit privileges)
- { name: pg_monitor, set: false }
# Don't auto-inherit pg_execute_server_program (need explicit SET ROLE)
- { name: pg_execute_server_program, inherit: false }
# Revoke old_role membership
- { name: old_role, state: absent }
set and inherit options only work in PG16+. On earlier versions they’re ignored with warning comments.
parameters
Object, mutable. Role-level config params via ALTER ROLE ... SET. Applies to all sessions for this user.
- name: dbuser_analyst
parameters:
work_mem: '256MB'
statement_timeout: '5min'
search_path: 'analytics,public'
log_statement: 'all'
ALTER USER "dbuser_analyst" SET "work_mem" = '256MB';
ALTER USER "dbuser_analyst" SET "statement_timeout" = '5min';
ALTER USER "dbuser_analyst" SET "search_path" = 'analytics,public';
ALTER USER "dbuser_analyst" SET "log_statement" = 'all';
Use special value DEFAULT (case-insensitive) to reset to PostgreSQL default:
- name: dbuser_app
parameters:
work_mem: DEFAULT # Reset to default
statement_timeout: '30s' # Set new value
ALTER USER "dbuser_app" SET "work_mem" = DEFAULT;
ALTER USER "dbuser_app" SET "statement_timeout" = '30s';
Common role-level params:
| Parameter | Description | Example |
|-----------|-------------|---------|
| work_mem | Query work memory | '64MB' |
| statement_timeout | Statement timeout | '30s' |
| lock_timeout | Lock wait timeout | '10s' |
| idle_in_transaction_session_timeout | Idle transaction timeout | '10min' |
| search_path | Schema search path | 'app,public' |
| log_statement | Log level | 'ddl' |
| temp_file_limit | Temp file size limit | '10GB' |
Query user-level params via the pg_db_role_setting system catalog.
pgbouncer
Boolean, mutable. Add user to Pgbouncer user list, default false.
For prod users needing connection pool access, must explicitly set pgbouncer: true.
Default false prevents accidentally exposing internal users to the pool.
# Prod user: needs connection pool
- name: dbuser_app
password: DBUser.App
pgbouncer: true
# Internal user: no connection pool needed
- name: dbuser_internal
password: DBUser.Internal
pgbouncer: false # Default, can be omitted
Users with pgbouncer: true are added to /etc/pgbouncer/userlist.txt.
pool_mode
Enum, mutable. User-level pool mode: transaction, session, or statement. Default transaction.
| Mode | Description | Use Case |
|------|-------------|----------|
| transaction | Return connection after txn | Most OLTP apps, default |
| session | Return connection after session | Apps needing session state |
| statement | Return after each statement | Simple stateless queries |
# DBA user: session mode (may need SET commands etc.)
- name: dbuser_dba
pgbouncer: true
pool_mode: session
# Normal business user: transaction mode
- name: dbuser_app
pgbouncer: true
pool_mode: transaction
User-level pool params are configured via /etc/pgbouncer/useropts.txt:
dbuser_dba = pool_mode=session max_user_connections=16
dbuser_monitor = pool_mode=session max_user_connections=8
pool_connlimit
Integer, mutable. User-level max pool connections, default -1 (unlimited).
- name: dbuser_app
pgbouncer: true
pool_connlimit: 50 # Max 50 pool connections for this user
ACL System
Pigsty provides a built-in, out-of-the-box access control / ACL system. Just assign these four default roles to business users:
| Role | Privileges | Typical Use Case |
|------|------------|------------------|
| dbrole_readwrite | Global read-write | Primary business prod accounts |
| dbrole_readonly | Global read-only | Other business read-only access |
| dbrole_admin | DDL privileges | Business admins, table creation |
| dbrole_offline | Restricted read-only (offline only) | Individual users, ETL/analytics |
# Typical business user configuration
pg_users:
- name: dbuser_app
password: DBUser.App
pgbouncer: true
roles: [dbrole_readwrite] # Prod account, read-write
- name: dbuser_readonly
password: DBUser.Readonly
pgbouncer: true
roles: [dbrole_readonly] # Read-only account
- name: dbuser_admin
password: DBUser.Admin
pgbouncer: true
roles: [dbrole_admin] # Admin, can execute DDL
- name: dbuser_etl
password: DBUser.ETL
roles: [dbrole_offline] # Offline analytics account
To redesign your own ACL system, customize parameters such as pg_default_roles, pg_default_privileges, and pg_default_hba_rules.
Pgbouncer Users
Pgbouncer is enabled by default as connection pool middleware. Pigsty adds all users in pg_users with explicit pgbouncer: true flag to the pgbouncer user list.
Users in connection pool are listed in /etc/pgbouncer/userlist.txt:
"postgres" ""
"dbuser_wiki" "SCRAM-SHA-256$4096:+77dyhrPeFDT/TptHs7/7Q==$KeatuohpKIYzHPCt/tqBu85vI11o9mar/by0hHYM2W8=:X9gig4JtjoS8Y/o1vQsIX/gY1Fns8ynTXkbWOjUfbRQ="
"dbuser_view" "SCRAM-SHA-256$4096:DFoZHU/DXsHL8MJ8regdEw==$gx9sUGgpVpdSM4o6A2R9PKAUkAsRPLhLoBDLBUYtKS0=:MujSgKe6rxcIUMv4GnyXJmV0YNbf39uFRZv724+X1FE="
"dbuser_monitor" "SCRAM-SHA-256$4096:fwU97ZMO/KR0ScHO5+UuBg==$CrNsmGrx1DkIGrtrD1Wjexb/aygzqQdirTO1oBZROPY=:L8+dJ+fqlMQh7y4PmVR/gbAOvYWOr+KINjeMZ8LlFww="
"dbuser_meta" "SCRAM-SHA-256$4096:leB2RQPcw1OIiRnPnOMUEg==$eyC+NIMKeoTxshJu314+BmbMFpCcspzI3UFZ1RYfNyU=:fJgXcykVPvOfro2MWNkl5q38oz21nSl1dTtM65uYR1Q="
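Each userlist.txt line is a pair of double-quoted fields. A minimal Python sketch for parsing one (the function name is illustrative):

```python
import shlex

def parse_userlist_line(line: str) -> tuple[str, str]:
    """Split a pgbouncer userlist.txt line into (username, password/verifier)."""
    # shlex honors the double quotes, so SCRAM verifiers containing
    # characters like $ : = parse cleanly as a single field
    user, secret = shlex.split(line)
    return user, secret

# parse_userlist_line('"postgres" ""') -> ('postgres', '')
```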
User-level pool params are maintained in /etc/pgbouncer/useropts.txt:
dbuser_dba = pool_mode=session max_user_connections=16
dbuser_monitor = pool_mode=session max_user_connections=8
When creating users, Pgbouncer user list is refreshed via online reload - doesn’t affect existing connections.
Pgbouncer runs as the same dbsu as PostgreSQL (the postgres OS user by default). Use the pgb alias to access pgbouncer admin functions.
The pgbouncer_auth_query parameter allows pool users to be authenticated via a dynamic query - convenient when you prefer not to manage pool users manually.
For user management operations, see User Management.
For user access privileges, see ACL: Role Privileges.
5 - Database
How to define and customize PostgreSQL databases through configuration?
In this document, “database” refers to a logical object within a database cluster created with CREATE DATABASE.
A PostgreSQL cluster can serve multiple databases simultaneously. In Pigsty, you can define required databases in cluster configuration.
Pigsty customizes the template1 template database - creating default schemas, installing default extensions, configuring default privileges. Newly created databases inherit these settings from template1.
You can also specify other template databases via template for instant database cloning.
By default, all business databases are added 1:1 to the Pgbouncer connection pool; pg_exporter auto-discovers all business databases for in-database object monitoring.
All databases are also registered as PostgreSQL datasources in Grafana on all INFRA nodes for PGCAT dashboards.
Define Database
Business databases are defined in cluster param pg_databases, an array of database definition objects.
During cluster initialization, databases are created in definition order, so later databases can use earlier ones as templates.
Example from Pigsty demo pg-meta cluster:
pg-meta:
hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
vars:
pg_cluster: pg-meta
pg_databases:
- { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: postgis, schema: public}, {name: timescaledb}]}
- { name: grafana ,owner: dbuser_grafana ,revokeconn: true ,comment: grafana primary database }
- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
- { name: kong ,owner: dbuser_kong ,revokeconn: true ,comment: kong the api gateway database }
- { name: gitea ,owner: dbuser_gitea ,revokeconn: true ,comment: gitea meta database }
- { name: wiki ,owner: dbuser_wiki ,revokeconn: true ,comment: wiki meta database }
- { name: noco ,owner: dbuser_noco ,revokeconn: true ,comment: nocodb database }
Each database definition is a complex object with fields below. Only name is required:
- name: meta # REQUIRED, `name` is the only mandatory field
state: create # Optional, database state: create (default), absent, recreate
baseline: cmdb.sql # Optional, SQL baseline file path (relative to Ansible search path, e.g., files/)
pgbouncer: true # Optional, add to pgbouncer database list? default true
schemas: [pigsty] # Optional, additional schemas to create, array of schema names
extensions: # Optional, extensions to install: array of extension objects
- { name: postgis , schema: public } # Can specify schema, or omit (installs to first schema in search_path)
- { name: timescaledb } # Some extensions create and use fixed schemas
comment: pigsty meta database # Optional, database comment/description
owner: postgres # Optional, database owner, defaults to current user
template: template1 # Optional, template to use, default template1
strategy: FILE_COPY # Optional, clone strategy: FILE_COPY or WAL_LOG (PG15+)
encoding: UTF8 # Optional, inherits from template/cluster config (UTF8)
locale: C # Optional, inherits from template/cluster config (C)
lc_collate: C # Optional, inherits from template/cluster config (C)
lc_ctype: C # Optional, inherits from template/cluster config (C)
locale_provider: libc # Optional, locale provider: libc, icu, builtin (PG15+)
icu_locale: en-US # Optional, ICU locale rules (PG15+)
icu_rules: '' # Optional, ICU collation rules (PG16+)
builtin_locale: C.UTF-8 # Optional, builtin locale provider rules (PG17+)
tablespace: pg_default # Optional, default tablespace
is_template: false # Optional, mark as template database
allowconn: true # Optional, allow connections, default true
revokeconn: false # Optional, revoke public CONNECT privilege, default false
register_datasource: true # Optional, register to grafana datasource? default true
connlimit: -1 # Optional, connection limit, -1 means unlimited
parameters: # Optional, database-level params via ALTER DATABASE SET
work_mem: '64MB'
statement_timeout: '30s'
pool_auth_user: dbuser_meta # Optional, auth user for pgbouncer auth_query
pool_mode: transaction # Optional, database-level pgbouncer pool mode
pool_size: 64 # Optional, database-level pgbouncer default pool size
pool_reserve: 32 # Optional, database-level pgbouncer reserve pool
pool_size_min: 0 # Optional, database-level pgbouncer min pool size
pool_connlimit: 100 # Optional, database-level max database connections
Since Pigsty v4.1.0, database pool fields are unified as pool_reserve and pool_connlimit; the legacy aliases pool_size_reserve / pool_max_db_conn map to these new names.
Parameter Overview
The only required field is name - a valid, unique database name within the cluster. All other params have sensible defaults.
Parameters marked “Immutable” only take effect at creation; changing them requires database recreation.
| Field | Category | Type | Attr | Description |
|-------|----------|------|------|-------------|
| name | Basic | string | Required | Database name, must be valid and unique |
| state | Basic | enum | Optional | State: create (default), absent, recreate |
| owner | Basic | string | Mutable | Database owner, defaults to postgres |
| comment | Basic | string | Mutable | Database comment |
| template | Template | string | Immutable | Template database, default template1 |
| strategy | Template | enum | Immutable | Clone strategy: FILE_COPY or WAL_LOG (PG15+) |
| encoding | Encoding | string | Immutable | Character encoding, default inherited (UTF8) |
| locale | Encoding | string | Immutable | Locale setting, default inherited (C) |
| lc_collate | Encoding | string | Immutable | Collation rule, default inherited (C) |
| lc_ctype | Encoding | string | Immutable | Character classification, default inherited (C) |
| locale_provider | Encoding | enum | Immutable | Locale provider: libc, icu, builtin (PG15+) |
| icu_locale | Encoding | string | Immutable | ICU locale rules (PG15+) |
| icu_rules | Encoding | string | Immutable | ICU collation customization (PG16+) |
| builtin_locale | Encoding | string | Immutable | Builtin locale rules (PG17+) |
| tablespace | Storage | string | Mutable | Default tablespace, change triggers data migration |
| is_template | Privilege | bool | Mutable | Mark as template database |
| allowconn | Privilege | bool | Mutable | Allow connections, default true |
| revokeconn | Privilege | bool | Mutable | Revoke PUBLIC CONNECT privilege |
| connlimit | Privilege | int | Mutable | Connection limit, -1 for unlimited |
| baseline | Init | string | Mutable | SQL baseline file path, runs only on first create |
| schemas | Init | (string\|object)[] | Mutable | Schema definitions to create |
| extensions | Init | (string\|object)[] | Mutable | Extension definitions to install |
| parameters | Init | object | Mutable | Database-level parameters |
| pgbouncer | Pool | bool | Mutable | Add to connection pool, default true |
| pool_mode | Pool | enum | Mutable | Pool mode: transaction (default) |
| pool_size | Pool | int | Mutable | Default pool size, default 64 |
| pool_size_min | Pool | int | Mutable | Min pool size, default 0 |
| pool_reserve | Pool | int | Mutable | Reserve pool size, default 32 |
| pool_connlimit | Pool | int | Mutable | Max database connections, default 100 |
| pool_auth_user | Pool | string | Mutable | Auth query user |
| register_datasource | Monitor | bool | Mutable | Register to Grafana datasource, default true |
Parameter Details
name
String, required. Database name - must be unique within the cluster.
Must be a valid PostgreSQL identifier: max 63 chars, no SQL keywords, starts with a letter or underscore, followed by letters, digits, underscores, or dollar signs. Must match: ^[A-Za-z_][A-Za-z0-9_$]{0,62}$
- name: myapp # Simple naming
- name: my_application # Underscore separated
- name: app_v2 # Version included
state
Enum for database operation: create, absent, or recreate. Default create.
| State | Description |
|-------|-------------|
| create | Default, create or modify database, adjust mutable params if exists |
| absent | Delete database with DROP DATABASE WITH (FORCE) |
| recreate | Drop then create, for database reset |
- name: myapp # state defaults to create
- name: olddb
state: absent # Delete database
- name: testdb
state: recreate # Rebuild database
owner
String, mutable. Database owner, defaults to pg_dbsu (postgres) if not specified.
The target user must exist. The database owner has full control over the database, including creating schemas, tables, and extensions - useful for multi-tenant scenarios.
Changing the owner executes the following (the old owner retains existing privileges):
ALTER DATABASE "myapp" OWNER TO "new_owner";
GRANT ALL PRIVILEGES ON DATABASE "myapp" TO "new_owner";
comment
String, mutable. Database comment, defaults to business database {name}.
Set via COMMENT ON DATABASE, supports Chinese and special chars (Pigsty auto-escapes quotes). Stored in the pg_shdescription catalog, viewable via \l+.
COMMENT ON DATABASE "myapp" IS 'my main application database';
- name: myapp
comment: my main application database
template
String, immutable. Template database for creation, default template1.
PostgreSQL’s CREATE DATABASE clones the template - new database inherits all objects, extensions, schemas, permissions. Pigsty customizes template1 during cluster init, so new databases inherit these settings.
| Template | Description |
|----------|-------------|
| template1 | Default, includes Pigsty pre-configured extensions/schemas/perms |
| template0 | Clean template, required for non-default locale providers |
| Custom database | Use existing database as template for cloning |
When using icu or builtin locale provider, must specify template: template0 since template1 locale settings can’t be overridden.
- name: myapp_icu
template: template0 # Required for ICU
locale_provider: icu
icu_locale: zh-Hans
Using template0 skips the monitoring extensions/schemas and default privileges, allowing a fully customized database.
strategy
Enum, immutable. Clone strategy: FILE_COPY or WAL_LOG. Available PG15+.
| Strategy | Description | Use Case |
|----------|-------------|----------|
| FILE_COPY | Direct file copy, PG15+ default | Large templates, general |
| WAL_LOG | Clone via WAL logging | Small templates, non-blocking |
WAL_LOG doesn’t block connections to the template during cloning, but is less efficient for large templates. It is ignored on PG14 and earlier.
- name: cloned_db
template: source_db
strategy: WAL_LOG # WAL-based cloning
encoding
String, immutable. Character encoding, inherits from template if unspecified (usually UTF8).
Strongly recommend UTF8 unless special requirements. Cannot be changed after creation.
- name: legacy_db
template: template0 # Use template0 for non-default encoding
encoding: LATIN1
locale
String, immutable. Locale setting - sets both lc_collate and lc_ctype. Inherits from template (usually C).
Determines string sort order and character classification. Use C or POSIX for best performance and cross-platform consistency; use language-specific locales (e.g., zh_CN.UTF-8) for proper language sorting.
- name: chinese_db
template: template0
locale: zh_CN.UTF-8 # Chinese locale
encoding: UTF8
lc_collate
String, immutable. String collation rule. Inherits from template (usually C).
Determines ORDER BY and comparison results. Common values: C (byte order, fastest), C.UTF-8, en_US.UTF-8, zh_CN.UTF-8. Cannot be changed after creation.
- name: myapp
template: template0
lc_collate: en_US.UTF-8 # English collation
lc_ctype: en_US.UTF-8
lc_ctype
String, immutable. Character classification rule for upper/lower case, digits, letters. Inherits from template (usually C).
Affects upper(), lower(), regex \w, etc. Cannot be changed after creation.
locale_provider
Enum, immutable. Locale implementation provider: libc, icu, or builtin. Available PG15+, default libc.
| Provider | Version | Description |
|----------|---------|-------------|
| libc | - | OS C library, traditional default, varies by system |
| icu | PG15+ | ICU library, cross-platform consistent, more langs |
| builtin | PG17+ | PostgreSQL builtin, most efficient, C/C.UTF-8 only |
Using icu or builtin requires template: template0 with corresponding icu_locale or builtin_locale.
- name: fast_db
template: template0
locale_provider: builtin # Builtin provider, most efficient
builtin_locale: C.UTF-8
icu_locale
String, immutable. ICU locale identifier. Available PG15+ when locale_provider: icu.
ICU identifiers follow BCP 47. Common values:
| Value | Description |
|-------|-------------|
| en-US | US English |
| en-GB | British English |
| zh-Hans | Simplified Chinese |
| zh-Hant | Traditional Chinese |
| ja-JP | Japanese |
| ko-KR | Korean |
- name: chinese_app
template: template0
locale_provider: icu
icu_locale: zh-Hans # Simplified Chinese ICU collation
encoding: UTF8
icu_rules
String, immutable. Custom ICU collation rules. Available PG16+.
Allows fine-tuning default sort behavior using ICU Collation Customization.
- name: custom_sort_db
template: template0
locale_provider: icu
icu_locale: en-US
icu_rules: '&V << w <<< W' # Custom V/W sort order
builtin_locale
String, immutable. Builtin locale provider rules. Available PG17+ when locale_provider: builtin. Values: C or C.UTF-8.
builtin provider is PG17’s new builtin implementation - faster than libc with consistent cross-platform behavior. Suitable for C/C.UTF-8 collation only.
- name: fast_db
template: template0
locale_provider: builtin
builtin_locale: C.UTF-8 # Builtin UTF-8 support
encoding: UTF8
tablespace
String, mutable. Default tablespace, default pg_default.
Changing tablespace triggers physical data migration - PostgreSQL moves all objects to new tablespace. Can take long time for large databases, use cautiously.
- name: archive_db
tablespace: slow_hdd # Archive data on slow storage
ALTER DATABASE "archive_db" SET TABLESPACE "slow_hdd";
is_template
Boolean, mutable. Mark database as template, default false.
When true, any user with CREATEDB privilege can use this database as template for cloning. Template databases typically pre-install standard schemas, extensions, and data.
- name: app_template
is_template: true # Mark as template, allow user cloning
schemas: [core, api]
extensions: [postgis, pg_trgm]
Deleting is_template: true databases: Pigsty first executes ALTER DATABASE ... IS_TEMPLATE false then drops.
allowconn
Boolean, mutable. Allow connections, default true.
Setting false completely disables connections at database level - no user (including superuser) can connect. Used for maintenance or archival purposes.
- name: archive_db
allowconn: false # Disallow all connections
ALTER DATABASE "archive_db" ALLOW_CONNECTIONS false;
revokeconn
Boolean, mutable. Revoke PUBLIC CONNECT privilege, default false.
When true, Pigsty executes:
- Revoke PUBLIC CONNECT, regular users can’t connect
- Grant CONNECT to the replication user (replicator) and monitor user (dbuser_monitor)
- Grant CONNECT to the admin user (dbuser_dba) and the owner WITH GRANT OPTION
Setting false restores PUBLIC CONNECT privilege.
- name: secure_db
owner: dbuser_secure
revokeconn: true # Revoke public connect, only specified users
connlimit
Integer, mutable. Max concurrent connections, default -1 (unlimited).
Positive integer limits max simultaneous sessions. Doesn’t affect superusers.
- name: limited_db
connlimit: 50 # Max 50 concurrent connections
ALTER DATABASE "limited_db" CONNECTION LIMIT 50;
baseline
String, one-time. SQL baseline file path executed after database creation.
Baseline files typically contain schema definitions, initial data, stored procedures. Path is relative to Ansible search path, usually in files/.
Baseline runs only on first creation; skipped if database exists. state: recreate re-runs baseline.
- name: myapp
baseline: myapp_schema.sql # Looks for files/myapp_schema.sql
schemas
Array, mutable (add/remove). Schema definitions to create or drop. Elements can be strings or objects.
Simple format - strings for schema names (create only):
schemas:
- app
- api
- core
Full format - objects for owner and drop operations:
schemas:
- name: app # Schema name (required)
owner: dbuser_app # Schema owner (optional), generates AUTHORIZATION clause
- name: deprecated
state: absent # Drop schema (CASCADE)
Create uses IF NOT EXISTS; drop uses CASCADE (deletes all objects in schema).
CREATE SCHEMA IF NOT EXISTS "app" AUTHORIZATION "dbuser_app";
DROP SCHEMA IF EXISTS "deprecated" CASCADE;
extensions
Array, mutable (add/remove). Extension definitions to install or uninstall. Elements can be strings or objects.
Simple format - strings for extension names (install only):
extensions:
- postgis
- pg_trgm
- vector
Full format - objects for schema, version, and uninstall:
extensions:
- name: vector # Extension name (required)
schema: public # Install to schema (optional)
version: '0.5.1' # Specific version (optional)
- name: old_extension
state: absent # Uninstall extension (CASCADE)
Install uses CASCADE to auto-install dependencies; uninstall uses CASCADE (deletes dependent objects).
CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "public" VERSION '0.5.1' CASCADE;
DROP EXTENSION IF EXISTS "old_extension" CASCADE;
parameters
Object, mutable. Database-level config params via ALTER DATABASE ... SET. Applies to all sessions connecting to this database.
- name: analytics
parameters:
work_mem: '256MB'
maintenance_work_mem: '512MB'
statement_timeout: '5min'
search_path: 'analytics,public'
Use special value DEFAULT (case-insensitive) to reset to PostgreSQL default:
parameters:
work_mem: DEFAULT # Reset to default
statement_timeout: '30s' # Set new value
ALTER DATABASE "myapp" SET "work_mem" = DEFAULT;
ALTER DATABASE "myapp" SET "statement_timeout" = '30s';
pgbouncer
Boolean, mutable. Add database to Pgbouncer pool list, default true.
Setting false excludes the database from Pgbouncer, so clients cannot access it via the connection pool; useful for internal management databases or direct-connect scenarios.
- name: internal_db
pgbouncer: false # No connection pool access
pool_mode
Enum, mutable. Pgbouncer pool mode: transaction, session, or statement. Default transaction.
| Mode | Description | Use Case |
|---|---|---|
| transaction | Return connection after txn | Most OLTP apps, default |
| session | Return connection after session | Apps needing session state |
| statement | Return after each statement | Simple stateless queries |
- name: session_app
pool_mode: session # Session-level pooling
pool_size
Integer, mutable. Pgbouncer default pool size, default 64.
Pool size determines backend connections reserved for this database. Adjust based on workload.
- name: high_load_db
pool_size: 128 # Larger pool for high load
pool_size_min
Integer, mutable. Pgbouncer minimum pool size, default 0.
Values > 0 pre-create specified backend connections for connection warming, reducing first-request latency.
- name: latency_sensitive
pool_size_min: 10 # Pre-warm 10 connections
pool_reserve
Integer, mutable. Pgbouncer reserve pool size, default 32.
When default pool exhausted, Pgbouncer can allocate up to pool_reserve additional connections for burst traffic.
- name: bursty_db
pool_size: 64
pool_reserve: 64 # Allow burst to 128 connections
pool_connlimit
Integer, mutable. Max connections via Pgbouncer pool, default 100.
This is Pgbouncer-level limit, independent of database’s connlimit param.
- name: limited_pool_db
pool_connlimit: 50 # Pool max 50 connections
pool_auth_user
String, mutable. User for Pgbouncer auth query.
Requires pgbouncer_auth_query enabled. When set, all Pgbouncer connections to this database use specified user for auth query password verification.
- name: myapp
pool_auth_user: dbuser_monitor # Use monitor user for auth query
register_datasource
Boolean, mutable. Register database to Grafana as PostgreSQL datasource, default true.
Set false to skip Grafana registration. For temp databases, test databases, or internal databases not needed in monitoring.
- name: temp_db
register_datasource: false # Don't register to Grafana
Template Inheritance
Many parameters inherit from template database if not explicitly specified. Default template is template1, whose encoding settings are determined by cluster init params:
New databases fork from template1, which is customized during PG_PROVISION with extensions, schemas, and default privileges, unless you explicitly use another template.
Deep Customization
Pigsty provides rich customization params. To customize template database, refer to:
If the above configurations don't meet your needs, use pg_init to specify custom cluster init scripts:
Locale Providers
PostgreSQL 15+ introduced locale_provider for different locale implementations. These are immutable after creation.
Pigsty’s configure wizard selects builtin C.UTF-8/C locale provider based on PG and OS versions.
Databases inherit cluster locale by default. To specify different locale provider, you must use template0.
Using ICU provider (PG15+):
- name: myapp_icu
template: template0 # ICU requires template0
locale_provider: icu
icu_locale: en-US # ICU locale rules
encoding: UTF8
Using builtin provider (PG17+):
- name: myapp_builtin
template: template0
locale_provider: builtin
builtin_locale: C.UTF-8 # Builtin locale rules
encoding: UTF8
Provider comparison: libc (traditional, OS-dependent), icu (PG15+, cross-platform, feature-rich), builtin (PG17+, most efficient C/C.UTF-8).
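To check which locale provider each existing database actually uses, you can query pg_database (the datlocprovider column exists on PG15+; values are c = libc, i = icu, b = builtin):

```sql
-- Inspect locale providers of existing databases (PG15+)
SELECT datname, datlocprovider, datcollate, datctype
FROM pg_database
ORDER BY datname;
```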
Connection Pool
Pgbouncer connection pool optimizes short-connection performance, reduces contention, prevents excessive connections from overwhelming database, and provides flexibility during migrations.
Pigsty configures a 1:1 connection pool for each PostgreSQL instance, running as the same pg_dbsu (the postgres OS user by default). The pool communicates with the database via the /var/run/postgresql Unix socket.
Pigsty adds all databases in pg_databases to pgbouncer by default.
Set pgbouncer: false to exclude specific databases.
Pgbouncer database list and config params are defined in /etc/pgbouncer/database.txt:
meta = host=/var/run/postgresql mode=session
grafana = host=/var/run/postgresql mode=transaction
bytebase = host=/var/run/postgresql auth_user=dbuser_meta
kong = host=/var/run/postgresql pool_size=32 reserve_pool=64
gitea = host=/var/run/postgresql min_pool_size=10
wiki = host=/var/run/postgresql
noco = host=/var/run/postgresql
mongo = host=/var/run/postgresql
When databases are created, the Pgbouncer database list is refreshed via online reload, which does not affect existing connections.
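You can inspect the live pool configuration through Pgbouncer's admin console: connect to the pgbouncer virtual database (e.g., psql -p 6432 -U dbuser_dba pgbouncer, assuming the default port and admin user) and run:

```sql
SHOW DATABASES;  -- per-database pool settings (pool_size, pool_mode, ...)
SHOW POOLS;      -- current active/waiting client and server connections
```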
6 - HBA Rules
Detailed explanation of PostgreSQL and Pgbouncer Host-Based Authentication (HBA) rules configuration in Pigsty.
Overview
HBA (Host-Based Authentication) controls “who can connect to the database from where and how”.
Pigsty manages HBA rules declaratively through pg_default_hba_rules and pg_hba_rules.
Pigsty renders the following config files during cluster init or HBA refresh:
| Config File | Path | Description |
|---|---|---|
| PostgreSQL HBA | /pg/data/pg_hba.conf | PostgreSQL server HBA rules |
| Pgbouncer HBA | /etc/pgbouncer/pgb_hba.conf | Connection pool HBA rules |
HBA rules are controlled by these parameters:
Rule features:
- Role filtering: rules support a role field, auto-filtered based on the instance's pg_role
- Order sorting: rules support an order field that controls their position in the final config file
- Two syntaxes: alias form (simplified) and raw form (direct HBA text)
Refresh HBA
After modifying config, re-render config files and reload services:
bin/pgsql-hba <cls> # Refresh entire cluster HBA (recommended)
bin/pgsql-hba <cls> <ip>... # Refresh specific instances in cluster
Script executes the following playbook:
./pgsql.yml -l <cls> -t pg_hba,pg_reload,pgbouncer_hba,pgbouncer_reload -e pg_reload=true
PostgreSQL only: ./pgsql.yml -l <cls> -t pg_hba,pg_reload -e pg_reload=true
Pgbouncer only: ./pgsql.yml -l <cls> -t pgbouncer_hba,pgbouncer_reload
Don't edit config files directly
Don't edit /pg/data/pg_hba.conf or /etc/pgbouncer/pgb_hba.conf directly; they will be overwritten on the next playbook run.
All changes should be made in pigsty.yml, then execute bin/pgsql-hba to refresh.
Parameter Details
pg_default_hba_rules
PostgreSQL global default HBA rule list, usually defined in all.vars, provides base access control for all clusters.
- Type: rule[], Level: Global (G)
pg_default_hba_rules:
- {user: '${dbsu}' ,db: all ,addr: local ,auth: ident ,title: 'dbsu access via local os user ident' ,order: 100}
- {user: '${dbsu}' ,db: replication ,addr: local ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
- {user: '${repl}' ,db: replication ,addr: localhost ,auth: pwd ,title: 'replicator replication from localhost',order: 200}
- {user: '${repl}' ,db: replication ,addr: intra ,auth: pwd ,title: 'replicator replication from intranet' ,order: 250}
- {user: '${repl}' ,db: postgres ,addr: intra ,auth: pwd ,title: 'replicator postgres db from intranet' ,order: 300}
- {user: '${monitor}' ,db: all ,addr: localhost ,auth: pwd ,title: 'monitor from localhost with password' ,order: 350}
- {user: '${monitor}' ,db: all ,addr: infra ,auth: pwd ,title: 'monitor from infra host with password',order: 400}
- {user: '${admin}' ,db: all ,addr: infra ,auth: ssl ,title: 'admin @ infra nodes with pwd & ssl' ,order: 450}
- {user: '${admin}' ,db: all ,addr: world ,auth: ssl ,title: 'admin @ everywhere with ssl & pwd' ,order: 500}
- {user: '+dbrole_readonly',db: all ,addr: localhost ,auth: pwd ,title: 'pgbouncer read/write via local socket',order: 550}
- {user: '+dbrole_readonly',db: all ,addr: intra ,auth: pwd ,title: 'read/write biz user via password' ,order: 600}
- {user: '+dbrole_offline' ,db: all ,addr: intra ,auth: pwd ,title: 'allow etl offline tasks from intranet',order: 650}
pg_hba_rules
PostgreSQL cluster/instance-level additional HBA rules, can override at cluster or instance level, merged with default rules and sorted by order.
- Type: rule[], Level: Global/Cluster/Instance (G/C/I), Default: []
pg_hba_rules:
- {user: app_user, db: app_db, addr: intra, auth: pwd, title: 'app user access'}
pgb_default_hba_rules
Pgbouncer global default HBA rule list, usually defined in all.vars.
- Type: rule[], Level: Global (G)
pgb_default_hba_rules:
- {user: '${dbsu}' ,db: pgbouncer ,addr: local ,auth: peer ,title: 'dbsu local admin access with os ident',order: 100}
- {user: 'all' ,db: all ,addr: localhost ,auth: pwd ,title: 'allow all user local access with pwd' ,order: 150}
- {user: '${monitor}' ,db: pgbouncer ,addr: intra ,auth: pwd ,title: 'monitor access via intranet with pwd' ,order: 200}
- {user: '${monitor}' ,db: all ,addr: world ,auth: deny ,title: 'reject all other monitor access addr' ,order: 250}
- {user: '${admin}' ,db: all ,addr: intra ,auth: pwd ,title: 'admin access via intranet with pwd' ,order: 300}
- {user: '${admin}' ,db: all ,addr: world ,auth: deny ,title: 'reject all other admin access addr' ,order: 350}
- {user: 'all' ,db: all ,addr: intra ,auth: pwd ,title: 'allow all user intra access with pwd' ,order: 400}
pgb_hba_rules
Pgbouncer cluster/instance-level additional HBA rules.
- Type: rule[], Level: Global/Cluster/Instance (G/C/I), Default: []
Note: Pgbouncer HBA does not support db: replication.
Rule Fields
Each HBA rule is a YAML dict supporting these fields:
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| user | string | No | all | Username; supports all, placeholders, +rolename |
| db | string | No | all | Database name; supports all, replication, or a db name |
| addr | string | Yes* | - | Address alias or CIDR, see Address Aliases |
| auth | string | No | pwd | Auth method alias, see Auth Methods |
| title | string | No | - | Rule description, rendered as a comment in the config |
| role | string | No | common | Instance role filter, see Role Filtering |
| order | int | No | 1000 | Sort weight, lower first, see Order Sorting |
| rules | list | Yes* | - | Raw HBA text lines, mutually exclusive with addr |
Either addr or rules must be specified. Use rules to write raw HBA format directly.
Address Aliases
Pigsty provides address aliases to simplify HBA rule writing:
| Alias | Expands To | Description |
|---|---|---|
| local | Unix socket | Local Unix socket |
| localhost | Unix socket + 127.0.0.1/32 + ::1/128 | Loopback addresses |
| admin | ${admin_ip}/32 | Admin IP address |
| infra | All infra group node IPs | Infrastructure nodes |
| cluster | All current cluster member IPs | Same-cluster instances |
| intra / intranet | 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 | Intranet CIDRs |
| world / all | 0.0.0.0/0 + ::/0 | Any address (IPv4 + IPv6) |
| &lt;CIDR&gt; | Used as-is | e.g., 192.168.1.0/24 |
Intranet CIDRs can be customized via node_firewall_intranet:
node_firewall_intranet:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
Auth Methods
Pigsty provides auth method aliases for simplified config:
| Alias | Actual Method | Connection Type | Description |
|---|---|---|---|
| pwd | scram-sha-256 or md5 | host | Auto-selected based on pg_pwd_enc |
| ssl | scram-sha-256 or md5 | hostssl | Force SSL + password |
| ssl-sha | scram-sha-256 | hostssl | Force SSL + SCRAM-SHA-256 |
| ssl-md5 | md5 | hostssl | Force SSL + MD5 |
| cert | cert | hostssl | Client certificate auth |
| trust | trust | host | Unconditional trust (dangerous) |
| deny / reject | reject | host | Reject connection |
| ident | ident | host | OS user mapping (PostgreSQL) |
| peer | peer | local | OS user mapping (Pgbouncer/local) |
pg_pwd_enc defaults to scram-sha-256, can be set to md5 for legacy client compatibility.
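To confirm which encoding is in effect and how passwords are actually stored, you can run the following as a superuser (SCRAM hashes start with SCRAM-SHA-256$, MD5 hashes with md5):

```sql
SHOW password_encryption;    -- scram-sha-256 or md5

-- Peek at the stored hash prefix for each role with a password
SELECT rolname, left(rolpassword, 13) AS pw_prefix
FROM pg_authid
WHERE rolpassword IS NOT NULL;
```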
User Variables
HBA rules support these user placeholders, auto-replaced with actual usernames during rendering:
| Placeholder | Default | Corresponding Param |
|---|---|---|
| ${dbsu} | postgres | pg_dbsu |
| ${repl} | replicator | pg_replication_username |
| ${monitor} | dbuser_monitor | pg_monitor_username |
| ${admin} | dbuser_dba | pg_admin_username |
Role Filtering
The role field in HBA rules controls which instances the rule applies to:
| Role | Description |
|---|---|
| common | Default, applies to all instances |
| primary | Primary instances only |
| replica | Replica instances only |
| offline | Offline instances only (pg_role: offline or pg_offline_query: true) |
| standby | Standby cluster instances |
| delayed | Delayed replica instances |
Role filtering matches based on instance’s pg_role variable. Non-matching rules are commented out (prefixed with #).
pg_hba_rules:
# Only applies on primary: writer can only connect to primary
- {user: writer, db: all, addr: intra, auth: pwd, role: primary, title: 'writer only on primary'}
# Only applies on offline instances: ETL dedicated network
- {user: '+dbrole_offline', db: all, addr: '172.20.0.0/16', auth: ssl, role: offline, title: 'offline dedicated'}
Order Sorting
PostgreSQL HBA is first-match-wins, so rule order is critical. Pigsty controls rule rendering order via the order field.
Order Interval Convention
| Interval | Usage |
|---|---|
| 0 - 99 | User high-priority rules (before all defaults) |
| 100 - 650 | Default rule zone (spaced by 50 for insertion) |
| 1000+ | Default for user rules without an explicit order |
PostgreSQL Default Rules Order
| Order | Rule Description |
|---|---|
| 100 | dbsu local ident |
| 150 | dbsu replication local |
| 200 | replicator localhost |
| 250 | replicator intra replication |
| 300 | replicator intra postgres |
| 350 | monitor localhost |
| 400 | monitor infra |
| 450 | admin infra ssl |
| 500 | admin world ssl |
| 550 | dbrole_readonly localhost |
| 600 | dbrole_readonly intra |
| 650 | dbrole_offline intra |
Pgbouncer Default Rules Order
| Order | Rule Description |
|---|---|
| 100 | dbsu local peer |
| 150 | all localhost pwd |
| 200 | monitor pgbouncer intra |
| 250 | monitor world deny |
| 300 | admin intra pwd |
| 350 | admin world deny |
| 400 | all intra pwd |
Syntax Examples
Alias Form: Using Pigsty simplified syntax
pg_hba_rules:
- title: allow grafana view access
role: primary
user: dbuser_view
db: meta
addr: infra
auth: ssl
Rendered result:
# allow grafana view access [primary]
hostssl meta dbuser_view 10.10.10.10/32 scram-sha-256
Raw Form: Using PostgreSQL HBA syntax directly
pg_hba_rules:
- title: allow intranet password access
role: common
rules:
- host all all 10.0.0.0/8 scram-sha-256
- host all all 172.16.0.0/12 scram-sha-256
- host all all 192.168.0.0/16 scram-sha-256
Rendered result:
# allow intranet password access [common]
host all all 10.0.0.0/8 scram-sha-256
host all all 172.16.0.0/12 scram-sha-256
host all all 192.168.0.0/16 scram-sha-256
Common Scenarios
Blacklist IP: Use order: 0 to ensure first match
pg_hba_rules:
- {user: all, db: all, addr: '10.1.1.100/32', auth: deny, order: 0, title: 'block bad ip'}
Whitelist App Server: High priority for specific IP
pg_hba_rules:
- {user: app_user, db: app_db, addr: '192.168.1.10/32', auth: ssl, order: 50, title: 'app server'}
Admin Force Certificate: Override default SSL password auth
pg_hba_rules:
- {user: '${admin}', db: all, addr: world, auth: cert, order: 10, title: 'admin cert only'}
Offline Instance Dedicated Network: Only on offline instances
pg_hba_rules:
- {user: '+dbrole_offline', db: all, addr: '172.20.0.0/16', auth: ssl-sha, role: offline, title: 'etl network'}
Restrict Access by Database: Sensitive databases limited to specific networks
pg_hba_rules:
- {user: fin_user, db: finance_db, addr: '10.20.0.0/16', auth: ssl, title: 'finance only'}
- {user: hr_user, db: hr_db, addr: '10.30.0.0/16', auth: ssl, title: 'hr only'}
Pgbouncer Dedicated Rules: Note no db: replication support
pgb_hba_rules:
- {user: '+dbrole_readwrite', db: all, addr: world, auth: ssl, title: 'app via pgbouncer'}
Complete Cluster Example
pg-prod:
hosts:
10.10.10.11: {pg_seq: 1, pg_role: primary}
10.10.10.12: {pg_seq: 2, pg_role: replica}
10.10.10.13: {pg_seq: 3, pg_role: offline}
vars:
pg_cluster: pg-prod
pg_hba_rules:
# Blacklist: known malicious IP (highest priority)
- {user: all, db: all, addr: '10.1.1.100/32', auth: deny, order: 0, title: 'blacklist'}
# App server whitelist (high priority)
- {user: app_user, db: app_db, addr: '192.168.1.0/24', auth: ssl, order: 50, title: 'app servers'}
# ETL tasks: offline instances only
- {user: etl_user, db: all, addr: '172.20.0.0/16', auth: pwd, role: offline, title: 'etl tasks'}
# Cluster internal monitoring
- {user: '${monitor}', db: all, addr: cluster, auth: pwd, order: 380, title: 'cluster monitor'}
pgb_hba_rules:
# App via connection pool
- {user: '+dbrole_readwrite', db: all, addr: '192.168.1.0/24', auth: ssl, title: 'app via pgbouncer'}
Verification & Troubleshooting
View Current HBA Rules
psql -c "TABLE pg_hba_file_rules" # View via SQL (recommended)
cat /pg/data/pg_hba.conf # View PostgreSQL HBA file
cat /etc/pgbouncer/pgb_hba.conf # View Pgbouncer HBA file
grep '^#' /pg/data/pg_hba.conf | head -20 # View rule titles (verify order)
Test Connection Auth
psql -h <host> -p 5432 -U <user> -d <db> -c "SELECT 1"
Common Issues
| Error Message | Possible Cause | Solution |
|---|---|---|
| no pg_hba.conf entry for host... | No matching HBA rule | Add the corresponding rule and refresh |
| password authentication failed | Wrong password or encoding mismatch | Check the password and pg_pwd_enc |
| Rule not taking effect | Not refreshed, or order conflict | Run bin/pgsql-hba and check order |
Important Notes
- Order sensitive: PostgreSQL HBA is first-match-wins, so use order wisely
- Role matching: ensure the role field matches the target instance's pg_role
- Address format: CIDR must be correct, e.g., 10.0.0.0/8, not 10.0.0.0/255.0.0.0
- Pgbouncer limitation: db: replication is not supported
- SSL prerequisite: ensure SSL is configured before using ssl or cert auth
- Test first: validate in a test environment before modifying HBA
- Refresh on scale: rules using addr: cluster need a refresh after cluster membership changes
7 - Access Control
Default role system and privilege model provided by Pigsty
Access control is determined by the combination of “role system + privilege templates + HBA”. This section focuses on how to declare roles and object privileges through configuration parameters.
Pigsty provides a streamlined ACL model, fully described by the following parameters:
- pg_default_roles: system roles and system users.
- pg_users: business users and roles.
- pg_default_privileges: default privileges for objects created by administrators/owners.
- pg_revoke_public, pg_default_schemas, pg_default_extensions: control the default behavior of template1.
After understanding these parameters, you can write fully reproducible privilege configurations.
Default Role System (pg_default_roles)
By default, it includes 4 business roles + 4 system users:
| Name | Type | Description |
|---|---|---|
| dbrole_readonly | NOLOGIN | Shared by all business users, has SELECT/USAGE |
| dbrole_readwrite | NOLOGIN | Inherits the read-only role, adds INSERT/UPDATE/DELETE |
| dbrole_admin | NOLOGIN | Inherits pg_monitor + the read-write role, can create objects and triggers |
| dbrole_offline | NOLOGIN | Restricted read-only role, only allowed on offline instances |
| postgres | User | System superuser, same as pg_dbsu |
| replicator | User | Used for streaming replication and backup, inherits monitoring and read-only privileges |
| dbuser_dba | User | Primary admin account, also synced to pgbouncer |
| dbuser_monitor | User | Monitoring account with pg_monitor privilege, records slow SQL by default |
These definitions are in pg_default_roles. They can theoretically be customized, but if you replace names, you must synchronize updates in HBA/ACL/script references.
Example: Add an additional dbrole_etl for offline tasks:
pg_default_roles:
- { name: dbrole_etl, login: false, roles: [dbrole_offline], comment: 'etl read-only role' }
- { name: dbrole_admin, login: false, roles: [pg_monitor, dbrole_readwrite, dbrole_etl] }
Effect: All users inheriting dbrole_admin automatically have dbrole_etl privileges, can access offline instances and execute ETL.
Default Users and Credential Parameters
System user usernames/passwords are controlled by the following parameters:
| Parameter | Default Value | Purpose |
|---|---|---|
| pg_dbsu | postgres | Database/system superuser |
| pg_dbsu_password | Empty string | dbsu password (disabled by default) |
| pg_replication_username | replicator | Replication username |
| pg_replication_password | DBUser.Replicator | Replication user password |
| pg_admin_username | dbuser_dba | Admin username |
| pg_admin_password | DBUser.DBA | Admin password |
| pg_monitor_username | dbuser_monitor | Monitoring user |
| pg_monitor_password | DBUser.Monitor | Monitoring user password |
If you modify these parameters, please synchronize updates to the corresponding user definitions in pg_default_roles to avoid role attribute inconsistencies.
Business Roles and Authorization (pg_users)
Business users are declared through pg_users (see User Configuration for detailed fields), where the roles field controls the granted business roles.
Example: Create one read-only and one read-write user:
pg_users:
- { name: app_reader, password: DBUser.Reader, roles: [dbrole_readonly], pgbouncer: true }
- { name: app_writer, password: DBUser.Writer, roles: [dbrole_readwrite], pgbouncer: true }
Inheriting the dbrole_* roles controls access privileges, so there is no need to GRANT privileges for each database separately. Combined with pg_hba_rules, you can also distinguish access sources.
For finer-grained ACL, you can use standard GRANT/REVOKE in baseline SQL or subsequent playbooks. Pigsty won’t prevent you from granting additional privileges.
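To verify which business roles a user has actually been granted, you can query the standard catalogs (app_reader and app_writer are the example users above):

```sql
-- Show role memberships for the example business users
SELECT r.rolname AS member, g.rolname AS granted_role
FROM pg_auth_members m
JOIN pg_roles r ON r.oid = m.member
JOIN pg_roles g ON g.oid = m.roleid
WHERE r.rolname IN ('app_reader', 'app_writer');
```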
Default Privilege Templates (pg_default_privileges)
pg_default_privileges configures ALTER DEFAULT PRIVILEGES for postgres, dbuser_dba, and dbrole_admin (which business admins assume via SET ROLE). The default template is as follows:
pg_default_privileges:
- GRANT USAGE ON SCHEMAS TO dbrole_readonly
- GRANT SELECT ON TABLES TO dbrole_readonly
- GRANT SELECT ON SEQUENCES TO dbrole_readonly
- GRANT EXECUTE ON FUNCTIONS TO dbrole_readonly
- GRANT USAGE ON SCHEMAS TO dbrole_offline
- GRANT SELECT ON TABLES TO dbrole_offline
- GRANT SELECT ON SEQUENCES TO dbrole_offline
- GRANT EXECUTE ON FUNCTIONS TO dbrole_offline
- GRANT INSERT ON TABLES TO dbrole_readwrite
- GRANT UPDATE ON TABLES TO dbrole_readwrite
- GRANT DELETE ON TABLES TO dbrole_readwrite
- GRANT USAGE ON SEQUENCES TO dbrole_readwrite
- GRANT UPDATE ON SEQUENCES TO dbrole_readwrite
- GRANT TRUNCATE ON TABLES TO dbrole_admin
- GRANT REFERENCES ON TABLES TO dbrole_admin
- GRANT TRIGGER ON TABLES TO dbrole_admin
- GRANT CREATE ON SCHEMAS TO dbrole_admin
As long as objects are created by the above administrators, they will automatically carry the corresponding privileges without manual GRANT. If business needs a custom template, simply replace this array.
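To confirm the default privileges are in place, use \ddp in psql, or query pg_default_acl directly:

```sql
-- Default privileges configured via ALTER DEFAULT PRIVILEGES
SELECT pg_get_userbyid(defaclrole) AS grantor,
       defaclobjtype AS objtype,   -- r=table, S=sequence, f=function, n=schema
       defaclacl     AS privileges
FROM pg_default_acl;
```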
Additional notes:
- pg_revoke_public defaults to true, automatically revoking PUBLIC's CREATE privilege on databases and the public schema.
- pg_default_schemas and pg_default_extensions control pre-created schemas/extensions in template1/postgres, typically used for monitoring objects (the monitor schema, pg_stat_statements, etc.).
Common Configuration Scenarios
Provide Read-Only Account for Partners
pg_users:
- name: partner_ro
password: Partner.Read
roles: [dbrole_readonly]
pg_hba_rules:
- { user: partner_ro, db: analytics, addr: 203.0.113.0/24, auth: ssl }
Effect: Partner account only has default read-only privileges after login, and can only access the analytics database via TLS from the specified network segment.
Grant DDL Capability to Business Administrators
pg_users:
- name: app_admin
password: DBUser.AppAdmin
roles: [dbrole_admin]
Business administrators can inherit the default DDL privilege template by SET ROLE dbrole_admin or logging in directly as app_admin.
Customize Default Privileges
pg_default_privileges:
- GRANT INSERT,UPDATE,DELETE ON TABLES TO dbrole_admin
- GRANT SELECT,UPDATE ON SEQUENCES TO dbrole_admin
- GRANT SELECT ON TABLES TO reporting_group
After replacing the default template, all objects created by administrators will carry the new privilege definitions, avoiding per-object authorization.
Coordination with Other Components
- HBA Rules: use pg_hba_rules to bind roles to sources (e.g., only allow dbrole_offline to access offline instances).
- Pgbouncer: users with pgbouncer: true are written to userlist.txt; pool_mode / pool_connlimit control connection-pool-level quotas.
- Grafana/Monitoring: dbuser_monitor's privileges come from pg_default_roles; if you add a new monitoring user, remember to grant pg_monitor plus access to the monitor schema.
Through these parameters, you can version the privilege system along with code, truly achieving “configuration as policy”.
8 - Parameters
Configure PostgreSQL parameters at cluster, instance, database, and user levels
PostgreSQL parameters can be configured at multiple levels with different scopes and precedence.
Pigsty supports four configuration levels, from global to local:
| Level | Scope | Configuration Method | Storage Location |
|---|---|---|---|
| Cluster | All instances in cluster | Patroni DCS / Tuning Templates | etcd + postgresql.conf |
| Instance | Single PG instance | pg_parameters / ALTER SYSTEM | postgresql.auto.conf |
| Database | All sessions in a DB | pg_databases[].parameters | pg_db_role_setting |
| User | All sessions of a user | pg_users[].parameters | pg_db_role_setting |
Priority from low to high: Cluster < Instance < Database < User < Session (SET command).
Higher priority settings override lower ones.
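To see which level supplied the value a session is actually using, pg_settings reports the winning source for every parameter:

```sql
-- 'database'/'user' mean a pg_db_role_setting entry won;
-- 'configuration file' means postgresql.conf/auto.conf;
-- 'session' means an explicit SET command
SELECT name, setting, source
FROM pg_settings
WHERE name IN ('work_mem', 'statement_timeout');
```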
For complete PostgreSQL parameter documentation, see PostgreSQL Docs: Server Configuration.
Cluster Level
Cluster-level parameters are shared across all instances (primary and replicas) in a PostgreSQL cluster.
In Pigsty, cluster parameters are managed via Patroni and stored in DCS (etcd by default).
Pigsty provides four pre-configured Patroni tuning templates optimized for different workloads, specified via pg_conf:
| Template | Use Case | Characteristics |
|---|---|---|
| oltp.yml | OLTP transactions | Low latency, high concurrency (default) |
| olap.yml | OLAP analytics | Large queries, high throughput |
| crit.yml | Critical/Financial | Max durability, safety over perf |
| tiny.yml | Tiny instances | Resource-constrained, dev/test |
Template files are located in roles/pgsql/templates/ and contain auto-calculated values based on hardware specs.
Templates are rendered to /etc/patroni/patroni.yml during cluster initialization. See Tuning Templates for details.
Before cluster creation, you can adjust these templates to modify initial parameters.
Once initialized, parameter changes should be made via Patroni’s configuration management.
Patroni DCS Config
Patroni stores cluster config in DCS (etcd by default), ensuring consistent configuration across all members.
Storage Structure:
/pigsty/ # namespace (patroni_namespace)
└── pg-meta/ # cluster name (pg_cluster)
├── config # cluster config (shared)
├── leader # current primary info
├── members/ # member registration
│ ├── pg-meta-1
│ └── pg-meta-2
└── ...
Rendering Flow:
- Init: a template (e.g., oltp.yml) is rendered via Jinja2 to /etc/patroni/patroni.yml
- Start: Patroni reads the local config and writes PostgreSQL parameters to DCS
- Runtime: Patroni periodically syncs the DCS config to the local PostgreSQL
Local Cache:
Each Patroni instance caches DCS config locally at /pg/conf/<instance>.yml:
- On start: Load from DCS, cache locally
- Runtime: Periodically sync DCS to local cache
- DCS unavailable: Continue with local cache (no failover possible)
Config File Hierarchy
Patroni renders DCS config to local PostgreSQL config files:
/pg/data/
├── postgresql.conf # Main config (managed by Patroni)
├── postgresql.base.conf # Base config (via include directive)
├── postgresql.auto.conf # Instance overrides (ALTER SYSTEM)
├── pg_hba.conf # Client auth config
└── pg_ident.conf # User mapping config
Load Order (priority low to high):
1. postgresql.conf: dynamically generated by Patroni with DCS cluster params
2. postgresql.base.conf: loaded via include, static base config
3. postgresql.auto.conf: auto-loaded by PostgreSQL, instance overrides
Since postgresql.auto.conf loads last, its parameters override earlier files.
Instance Level
Instance-level parameters apply only to a single PostgreSQL instance, overriding cluster-level config.
These are written to postgresql.auto.conf, which loads last and can override any cluster parameter.
This is a powerful technique for setting instance-specific values:
- Set hot_standby_feedback = on on replicas
- Adjust work_mem or maintenance_work_mem for specific instances
- Set recovery_min_apply_delay for delayed replicas
Using pg_parameters
In Pigsty config, use pg_parameters to define instance-level parameters:
pg-meta:
hosts:
10.10.10.10:
pg_seq: 1
pg_role: primary
pg_parameters: # instance-level params
log_statement: all # log all SQL for this instance only
vars:
pg_cluster: pg-meta
pg_parameters: # cluster default instance params
log_timezone: Asia/Shanghai
log_min_duration_statement: 1000
Use ./pgsql.yml -l <cls> -t pg_param to apply parameters, which renders to postgresql.auto.conf.
Override Hierarchy
pg_parameters can be defined at different Ansible config levels, priority low to high:
all:
vars:
pg_parameters: # global default
log_statement: none
children:
pg-meta:
vars:
pg_parameters: # cluster override
log_statement: ddl
hosts:
10.10.10.10:
pg_parameters: # instance override (highest)
log_statement: all
Using ALTER SYSTEM
You can also modify instance parameters at runtime via ALTER SYSTEM:
-- Set parameters
ALTER SYSTEM SET work_mem = '256MB';
ALTER SYSTEM SET log_min_duration_statement = 1000;
-- Reset to default
ALTER SYSTEM RESET work_mem;
ALTER SYSTEM RESET ALL; -- Reset all ALTER SYSTEM settings
-- Reload config to take effect
SELECT pg_reload_conf();
ALTER SYSTEM writes to postgresql.auto.conf.
Note: In Pigsty-managed clusters, postgresql.auto.conf is managed by Ansible via pg_parameters.
Manual ALTER SYSTEM changes may be overwritten on next playbook run.
Use pg_parameters in pigsty.yml for persistent instance-level params.
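To see exactly which file-level entries are in play, and whether an ALTER SYSTEM line is being overridden, you can query pg_file_settings:

```sql
-- All parameter entries read from config files; 'applied' marks the winner
SELECT sourcefile, name, setting, applied
FROM pg_file_settings
ORDER BY sourcefile, name;
```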
List-Type Parameters
PostgreSQL has special parameters that accept comma-separated lists. In the YAML config,
quote the entire value so it is always treated as a single string:
# Correct: quote the entire value
pg_parameters:
shared_preload_libraries: 'timescaledb, pg_stat_statements'
search_path: '"$user", public, app'
# Wrong: unquoted comma-separated values are fragile and may be misparsed
pg_parameters:
shared_preload_libraries: timescaledb, pg_stat_statements # quote this value!
Pigsty auto-detects these list parameters and renders them without outer quotes:
| Parameter | Description | Example Value |
|---|---|---|
| shared_preload_libraries | Preload shared libs | 'timescaledb, pg_stat_statements' |
| search_path | Schema search path | '"$user", public, app' |
| local_preload_libraries | Local preload libs | 'auto_explain' |
| session_preload_libraries | Session preload libs | 'pg_hint_plan' |
| log_destination | Log output targets | 'csvlog, stderr' |
| unix_socket_directories | Unix socket dirs | '/var/run/postgresql, /tmp' |
| temp_tablespaces | Temp tablespaces | 'ssd_space, hdd_space' |
| debug_io_direct | Direct I/O mode (PG16+) | 'data, wal' |
Rendering Example:
# pigsty.yml config (quotes required in YAML)
pg_parameters:
shared_preload_libraries: 'timescaledb, pg_stat_statements'
search_path: '"$user", public, app'
work_mem: 64MB
# Rendered postgresql.auto.conf (list params unquoted)
shared_preload_libraries = timescaledb, pg_stat_statements
search_path = "$user", public, app
work_mem = '64MB'
Database Level
Database-level parameters apply to all sessions connected to a specific database.
Implemented via ALTER DATABASE ... SET, stored in pg_db_role_setting.
Configuration
Use the parameters field in pg_databases:
```yaml
pg_databases:
  - name: analytics
    owner: dbuser_analyst
    parameters:
      work_mem: 256MB                       # analytics needs more memory
      maintenance_work_mem: 1GB             # large table maintenance
      statement_timeout: 10min              # allow long queries
      search_path: '"$user", public, mart'  # list param needs quotes
```
Like instance-level params, list-type values must be quoted in YAML.
Rendering Rules
Database params are set via ALTER DATABASE ... SET. Pigsty auto-selects correct syntax:
List-type params (`search_path`, `temp_tablespaces`, `local_preload_libraries`, `session_preload_libraries`, `log_destination`) are rendered without outer quotes:

```sql
ALTER DATABASE "analytics" SET "search_path" = "$user", public, mart;
```

Scalar params get single-quoted values:

```sql
ALTER DATABASE "analytics" SET "work_mem" = '256MB';
ALTER DATABASE "analytics" SET "statement_timeout" = '10min';
```
Note: While log_destination is in the database whitelist, its context is sighup,
so it cannot take effect at database level. Configure it at instance level (pg_parameters).
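The statement generation described above can be sketched as follows. This is an illustrative simplification, not Pigsty's actual code; `alter_database_sql` and the whitelist are assumptions for demonstration:

```python
# Illustrative sketch: turn database-level parameters into ALTER DATABASE
# statements, applying the list-vs-scalar quoting rule described above.
# NOT Pigsty's real implementation.

LIST_PARAMS = {
    "search_path", "temp_tablespaces", "local_preload_libraries",
    "session_preload_libraries", "log_destination",
}

def alter_database_sql(dbname: str, params: dict) -> list:
    stmts = []
    for name, value in params.items():
        if name in LIST_PARAMS:
            rendered = value          # list params: no outer quotes
        else:
            rendered = f"'{value}'"   # scalar params: single-quoted
        stmts.append(f'ALTER DATABASE "{dbname}" SET "{name}" = {rendered};')
    return stmts

for stmt in alter_database_sql("analytics", {
        "work_mem": "256MB",
        "search_path": '"$user", public, mart'}):
    print(stmt)
```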
View Database Params
```sql
-- View params for a specific database
SELECT datname, unnest(setconfig) AS setting
FROM pg_db_role_setting drs
JOIN pg_database d ON d.oid = drs.setdatabase
WHERE drs.setrole = 0 AND datname = 'analytics';
```
Manual Management
```sql
-- Set params
ALTER DATABASE analytics SET work_mem = '256MB';
ALTER DATABASE analytics SET search_path = "$user", public, myschema;

-- Reset params
ALTER DATABASE analytics RESET work_mem;
ALTER DATABASE analytics RESET ALL;
```
User Level
User-level parameters apply to all sessions of a specific database user.
Implemented via ALTER USER ... SET, also stored in pg_db_role_setting.
Configuration
Use the parameters field in pg_users or pg_default_roles:
```yaml
pg_users:
  - name: dbuser_analyst
    password: DBUser.Analyst
    parameters:
      work_mem: 256MB                            # more memory for analytics
      statement_timeout: 5min                    # allow longer queries
      search_path: '"$user", public, analytics'  # list param needs quotes
      log_statement: all                         # log all SQL
```
Rendering Rules
Same as database-level:
List-type params (`search_path`, `temp_tablespaces`, `local_preload_libraries`, `session_preload_libraries`) are rendered without outer quotes:

```sql
ALTER USER "dbuser_analyst" SET "search_path" = "$user", public, analytics;
```

Scalar params get single-quoted values:

```sql
ALTER USER "dbuser_analyst" SET "work_mem" = '256MB';
ALTER USER "dbuser_analyst" SET "statement_timeout" = '5min';
```
DEFAULT Value
Use DEFAULT (case-insensitive) to reset a parameter to PostgreSQL default:
```yaml
parameters:
  work_mem: DEFAULT         # reset to default
  statement_timeout: 30s    # set specific value
```

```sql
ALTER USER "dbuser_app" SET "work_mem" = DEFAULT;
ALTER USER "dbuser_app" SET "statement_timeout" = '30s';
```
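The case-insensitive `DEFAULT` check happens before any quoting is applied, as this sketch shows. The function name and whitelist are assumptions for demonstration, not Pigsty's actual code:

```python
# Illustrative sketch: render one parameter value per the rules above,
# honoring the case-insensitive DEFAULT keyword. NOT Pigsty's real code.

LIST_PARAMS = {"search_path", "temp_tablespaces",
               "local_preload_libraries", "session_preload_libraries"}

def render_value(name: str, value: str) -> str:
    if str(value).strip().upper() == "DEFAULT":
        return "DEFAULT"       # reset to PostgreSQL default, unquoted
    if name in LIST_PARAMS:
        return str(value)      # list params: emitted without outer quotes
    return f"'{value}'"        # scalar params: single-quoted

print(render_value("work_mem", "default"))        # → DEFAULT
print(render_value("statement_timeout", "30s"))   # → '30s'
```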
View User Params
```sql
-- View params for a specific user
SELECT rolname, unnest(setconfig) AS setting
FROM pg_db_role_setting drs
JOIN pg_roles r ON r.oid = drs.setrole
WHERE rolname = 'dbuser_analyst';
```
Manual Management
```sql
-- Set params
ALTER USER dbuser_app SET work_mem = '128MB';
ALTER USER dbuser_app SET search_path = "$user", public, myschema;

-- Reset params
ALTER USER dbuser_app RESET work_mem;
ALTER USER dbuser_app RESET ALL;
```
Priority
When the same parameter is set at multiple levels, PostgreSQL applies this priority (low to high):
```
postgresql.conf          ← Cluster params (Patroni DCS)
        ↓
postgresql.auto.conf     ← Instance params (pg_parameters / ALTER SYSTEM)
        ↓
Database level           ← ALTER DATABASE SET
        ↓
User level               ← ALTER USER SET
        ↓
Session level            ← SET command
```
Database vs User Priority:
When a user connects to a specific database and the same parameter is set at both levels,
PostgreSQL applies the user-level setting: role-specific settings override database-specific ones.
Example:
```yaml
# Database: analytics has work_mem = 256MB
pg_databases:
  - name: analytics
    parameters:
      work_mem: 256MB

# User: analyst has work_mem = 512MB
pg_users:
  - name: analyst
    parameters:
      work_mem: 512MB
```
- `analyst` connecting to `analytics`: `work_mem = 512MB` (user-level takes precedence)
- Other users connecting to `analytics`: `work_mem = 256MB` (database-level applies)
- `analyst` connecting to other databases: `work_mem = 512MB` (user-level applies)
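The resolution order can be sketched as a simple lookup chain. This is a simplified model for illustration (PostgreSQL also supports per-database-per-role settings, omitted here); the function and dicts are assumptions, not a real API:

```python
# Simplified model of PostgreSQL setting resolution:
# session > user > database > instance. Per-database-per-role
# settings are omitted for brevity.

def effective_value(param, instance, db_params, user_params, session=None):
    """Return the setting that wins for a given connection."""
    if session is not None:
        return session                 # SET command in the session
    if param in user_params:
        return user_params[param]      # ALTER USER ... SET
    if param in db_params:
        return db_params[param]        # ALTER DATABASE ... SET
    return instance.get(param)         # postgresql[.auto].conf

instance = {"work_mem": "4MB"}
db = {"work_mem": "256MB"}     # analytics
user = {"work_mem": "512MB"}   # analyst

print(effective_value("work_mem", instance, db, user))  # → 512MB (user wins)
print(effective_value("work_mem", instance, db, {}))    # → 256MB (database applies)
```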