Conf Templates

Batteries-included configuration templates for specific scenarios, with detailed explanations.

Pigsty provides various ready-to-use configuration templates for different deployment scenarios.

You can specify a configuration template with the -c option during configure. If no template is specified, the default meta template is used.
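
For example, to pick the feature-rich single-node template or one of the HA templates from the table below:

./configure -c rich      # feature-rich single-node template
./configure -c ha/full   # one of the HA templates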

Category           Templates
Solo Templates     meta, rich, fat, slim, infra
Kernel Templates   pgsql, citus, mssql, polar, ivory, mysql, pgtde, oriole, supabase
HA Templates       ha/simu, ha/full, ha/safe, ha/trio, ha/dual
App Templates      app/odoo, app/dify, app/electric, app/maybe, app/teable, app/registry
Misc Templates     demo/el, demo/debian, demo/demo, demo/minio, build/oss, build/pro

1 - Solo Templates

2 - meta

Default single-node installation template with extensive configuration parameter descriptions

The meta configuration template is Pigsty's default, designed to deliver Pigsty's core functionality (deploying PostgreSQL) on a single node.

To maximize compatibility, meta installs only the minimum required software set, so it runs on all supported operating system distributions and architectures.


Overview

  • Config Name: meta
  • Node Count: Single node
  • Description: Default single-node installation template with extensive configuration parameter descriptions and minimum required feature set.
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, slim, fat

Usage: This is the default config template, so there’s no need to specify -c meta explicitly during configure:

./configure [-i <primary_ip>]

For example, to install PostgreSQL 17 instead of the default 18, use the -v option of configure:

./configure -v 17   # or 16,15,14,13....
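
After configure completes, apply the configuration with the deploy playbook, per the Usage steps in the config header below:

./deploy.yml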

Content

Source: pigsty/conf/meta.yml

---
#==============================================================#
# File      :   meta.yml
# Desc      :   Pigsty default 1-node online install config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the default 1-node configuration template, with:
# INFRA, NODE, PGSQL, ETCD, MINIO, DOCKER, APP (pgadmin)
# with basic pg extensions: postgis, pgvector
#
# Works with PostgreSQL 14-18 on all supported platforms
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_extensions: [ postgis, pgvector ]

        # define business users/roles : https://doc.pgsty.com/pgsql/user
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }

        # define business databases : https://doc.pgsty.com/pgsql/db
        pg_databases:
          - name: meta
            baseline: cmdb.sql
            comment: "pigsty meta database"
            schemas: [pigsty]
            # define extensions in database : https://doc.pgsty.com/pgsql/extension/create
            extensions: [ postgis, vector ]

        # define HBA rules : https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

        # define backup policies: https://doc.pgsty.com/pgsql/backup
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day 1am

        # define (OPTIONAL) L2 VIP that bind to primary
        #pg_vip_enabled: true
        #pg_vip_address: 10.10.10.2/24
        #pg_vip_interface: eth1


    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: false   # disable in 1-node mode :  https://doc.pgsty.com/admin/repo
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    #minio:
    #  hosts:
    #    10.10.10.10: { minio_seq: 1 }
    #  vars:
    #    minio_cluster: minio
    #    minio_users:                      # list of minio users to be created
    #      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
    #      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
    #      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://doc.pgsty.com/docker
    # APP    : https://doc.pgsty.com/app
    #----------------------------------------------#
    # launch example pgadmin app with: ./app.yml (http://10.10.10.10:8885 [email protected] / pigsty)
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]
              PGADMIN_DEFAULT_PASSWORD: pigsty


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
      pgadmin : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      #minio  : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    node_repo_modules: 'node,infra,pgsql' # add these repos directly to the singleton node
    #node_repo_modules: local             # use this if you want to build & use a local repo
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with the latest version

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ]                 # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    # if you want to use minio as backup repo instead of 'local' fs, uncomment this, and configure `pgbackrest_repo`
    # you can also use external object storage as backup repo
    #pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    #pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
    #  local:                          # default pgbackrest repo with local posix fs
    #    path: /pg/backup              # local backup directory, `/pg/backup` by default
    #    retention_full_type: count    # retention full backups by count
    #    retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
    #  minio:                          # optional minio repo for pgbackrest
    #    type: s3                      # minio is s3-compatible, so s3 is used
    #    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    #    s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
    #    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    #    s3_key: pgbackrest            # minio user access key for pgbackrest
    #    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    #    s3_uri_style: path            # use path style uri for minio rather than host style
    #    path: /pgbackrest             # minio backup path, default is `/pgbackrest`
    #    storage_port: 9000            # minio port, 9000 by default
    #    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    #    block: y                      # Enable block incremental backup
    #    bundle: y                     # bundle small files into a single file
    #    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    #    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    #    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #    retention_full_type: time     # retention full backup by time on minio repo
    #    retention_full: 14            # keep full backup for last 14 days
    #  s3: # aliyun oss (s3 compatible) object storage service
    #    type: s3                      # oss is s3-compatible
    #    s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
    #    s3_region: oss-cn-beijing
    #    s3_bucket: <your_bucket_name>
    #    s3_key: <your_access_key>
    #    s3_key_secret: <your_secret_key>
    #    s3_uri_style: host
    #    path: /pgbackrest
    #    bundle: y                     # bundle small files into a single file
    #    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    #    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    #    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #    retention_full_type: time     # retention full backup by time on minio repo
    #    retention_full: 14            # keep full backup for last 14 days

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The meta template is Pigsty’s default getting-started configuration, designed for quick onboarding.

Use Cases:

  • First-time Pigsty users
  • Quick deployment in development and testing environments
  • Small production environments running on a single machine
  • As a base template for more complex deployments

Key Features:

  • Online installation mode that does not build a local software repository (repo_enabled: false)
  • Installs PostgreSQL 18 by default, with the postgis and pgvector extensions
  • Includes complete monitoring infrastructure (Grafana, Prometheus, Loki, etc.)
  • Preconfigured Docker and pgAdmin application examples
  • MinIO backup storage is disabled by default and can be enabled as needed (a sketch follows this list)
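
To enable it, a minimal sketch based on the stubs that ship commented out in this template: uncomment the minio cluster group and switch the backup method (the full pgbackrest_repo minio stanza is shown commented in the config above).

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
  vars:
    minio_cluster: minio
    minio_users:
      - { access_key: pgbackrest ,secret_key: S3User.Backup ,policy: pgsql }

pgbackrest_method: minio    # use minio instead of the 'local' fs backup repo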

Notes:

  • Default passwords are samples and must be changed for production environments
  • Single-node etcd has no high-availability guarantee and suits only development and testing (see the etcd sketch after this list)
  • If you need to build a local software repository, use the rich template
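
If etcd availability matters, the etcd stubs in these templates suggest 3 or 5 members; a minimal sketch following those commented stubs:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }   # assign etcd_seq from 1 ~ n
    10.10.10.12: { etcd_seq: 3 }   # use an odd member count
  vars:
    etcd_cluster: etcd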

3 - rich

Feature-rich single-node configuration with local software repository, all extensions, MinIO backup, and complete examples

The rich configuration template is an enhanced version of meta, designed for users who want the complete feature set.

If you want to build a local software repository, use MinIO for backup storage, run Docker applications, or need preconfigured business databases, use this template.


Overview

  • Config Name: rich
  • Node Count: Single node
  • Description: Feature-rich single-node configuration, adding a local software repository, MinIO backup, complete extensions, and Docker application examples on top of meta
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, slim, fat

This template’s main enhancements over meta:

  • Builds a local software repository (repo_enabled: true) and downloads all PG extensions into it
  • Enables single-node MinIO as PostgreSQL backup storage
  • Preinstalls TimescaleDB, pgvector, pg_wait_sampling, and other extensions
  • Includes detailed, commented examples of user/database/service definitions
  • Adds a Redis primary-replica instance example
  • Preconfigures a three-node pg-test HA cluster stub

Usage:

./configure -c rich [-i <primary_ip>]

Content

Source: pigsty/conf/rich.yml

---
#==============================================================#
# File      :   rich.yml
# Desc      :   Pigsty feature-rich 1-node online install config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the enhanced version of default meta.yml, which has:
# - almost all available postgres extensions
# - build local software repo for entire env
# - 1 node minio used as central backup repo
# - cluster stub for 3-node pg-test / ferret / redis
# - stub for nginx, certs, and website self-hosting config
# - detailed comments for database / user / service
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c rich
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_extensions: [ postgis, timescaledb, pgvector, pg_wait_sampling ]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'

        # define business users/roles : https://doc.pgsty.com/pgsql/user
        pg_users:
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, the password. can be a scram-sha-256 hash string or plain text
            #state: create                   # optional, create|absent, 'create' by default, use 'absent' to drop user
            #login: true                     # optional, can log in, true by default (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create databases? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to the pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin|readonly|readwrite|offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
            # Enhanced roles syntax (PG16+): roles can be string or object with options:
            #   - dbrole_readwrite                       # simple string: GRANT role
            #   - { name: role, admin: true }            # GRANT WITH ADMIN OPTION
            #   - { name: role, set: false }             # PG16: REVOKE SET OPTION
            #   - { name: role, inherit: false }         # PG16: REVOKE INHERIT OPTION
            #   - { name: role, state: absent }          # REVOKE membership
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database   }
          #- {name: dbuser_remove ,state: absent }       # use state: absent to remove a user

        # define business databases : https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among the ansible search path, e.g.: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - vector                      # install pgvector for vector similarity search
              - postgis                     # install postgis for geospatial type & index
              - timescaledb                 # install timescaledb for time-series data
              - { name: pg_wait_sampling, schema: monitor } # install pg_wait_sampling on monitor schema
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }

        # define HBA rules : https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

        # define backup policies: https://doc.pgsty.com/pgsql/backup
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day 1am

        # define (OPTIONAL) L2 VIP that bind to primary
        #pg_vip_enabled: true
        #pg_vip_address: 10.10.10.2/24
        #pg_vip_interface: eth1

    #----------------------------------------------#
    # PGSQL HA Cluster Example: 3-node pg-test
    #----------------------------------------------#
    #pg-test:
    #  hosts:
    #    10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
    #    10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
    #    10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
    #  vars:
    #    pg_cluster: pg-test           # define pgsql cluster name
    #    pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
    #    pg_databases: [{ name: test }]
    #    # define business service here: https://doc.pgsty.com/pgsql/service
    #    pg_services:                        # extra services in addition to pg_default_services, array of service definition
    #      # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
    #      - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
    #        port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
    #        ip: "*"                         # optional, service bind ip address, `*` for all ip by default
    #        selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
    #        dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
    #        check: /sync                    # optional, health check url path, / by default
    #        backup: "[? pg_role == `primary`]"  # backup server selector
    #        maxconn: 3000                   # optional, max allowed front-end connection
    #        balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
    #        options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    #    pg_vip_enabled: true
    #    pg_vip_address: 10.10.10.3/24
    #    pg_vip_interface: eth1
    #    node_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
    #      - '00 01 * * 1 postgres /pg/bin/pg-backup full'
    #      - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: true    # build local repo, and install everything from it:  https://doc.pgsty.com/admin/repo
        # and download all extensions into local repo
        repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio users to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://doc.pgsty.com/docker
    # APP    : https://doc.pgsty.com/app
    #----------------------------------------------#
    # OPTIONAL, launch example pgadmin app with: ./app.yml & ./app.yml -e app=bytebase
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition

          # Admin GUI for PostgreSQL, launch with: ./app.yml
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]   # default user name
              PGADMIN_DEFAULT_PASSWORD: pigsty         # default password

          # Schema Migration GUI for PostgreSQL, launch with: ./app.yml -e app=bytebase
          bytebase:
            conf:
              BB_DOMAIN: http://ddl.pigsty  # replace this and BB_PGURL below with your public domain name and postgres database url
              BB_PGURL: "postgresql://dbuser_bytebase:[email protected]:5432/bytebase?sslmode=prefer"

    #----------------------------------------------#
    # REDIS : https://doc.pgsty.com/redis
    #----------------------------------------------#
    # OPTIONAL, launch redis clusters with: ./redis.yml
    redis-ms:
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }



  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    certbot_sign: false               # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]     # replace your email address to receive expiration notice
    infra_portal:                     # infra services exposed via portal
      home      : { domain: i.pigsty }     # default domain name
      pgadmin   : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase  : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" }
      minio     : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      #website:   # static local website example stub
      #  domain: repo.pigsty              # external domain name for static site
      #  certbot: repo.pigsty             # use certbot to sign https certificate for this static site
      #  path: /www/pigsty                # path to the static site directory

      #supabase:  # dynamic upstream service example stub
      #  domain: supa.pigsty          # external domain name for upstream service
      #  certbot: supa.pigsty         # certbot cert name, apply with `make cert`
      #  endpoint: "10.10.10.10:8000" # endpoint address of the upstream service
      #  websocket: true              # add websocket support

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - '${admin_ip} i.pigsty sss.pigsty'
      - '${admin_ip} adm.pigsty ddl.pigsty repo.pigsty supa.pigsty'
    node_repo_modules: local              # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ]                 # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    # this template uses minio as the backup repo instead of the 'local' fs, configured via `pgbackrest_repo` below
    # you can also use external object storage as backup repo
    pgbackrest_method: minio          # use minio as backup repo instead of 'local' fs
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
      s3:                             # you can use cloud object storage as backup repo
        type: s3                      # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
...

Explanation

The rich template is Pigsty's full-feature showcase configuration, suitable for users who want to explore all of its capabilities.

Use Cases:

  • Offline environments requiring local software repository
  • Environments needing MinIO as PostgreSQL backup storage
  • Pre-planning multiple business databases and users
  • Running Docker applications such as pgAdmin and Bytebase (see the launch example after this list)
  • Learners wanting to understand complete configuration parameter usage
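
For the Docker apps defined in this template, the launch commands come from the comments in the config above:

./app.yml                   # launch the default app (pgadmin)
./app.yml -e app=bytebase   # launch the bytebase app instead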

Main Differences from meta:

  • Enables local software repository building (repo_enabled: true)
  • Enables MinIO-backed backups (pgbackrest_method: minio); the key differing vars are sketched after this list
  • Preinstalls TimescaleDB, pg_wait_sampling, and other additional extensions
  • Includes detailed parameter comments explaining each setting
  • Preconfigures an HA cluster stub (pg-test)
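
As a sketch, the key vars that differ between the two configs above (repo_enabled sits under the infra group vars; the others are global):

repo_enabled: true          # meta sets false: build a local software repo
node_repo_modules: local    # meta: 'node,infra,pgsql', install from upstream
pgbackrest_method: minio    # meta: commented out, backups go to local fs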

Notes:

  • Some extensions are unavailable on the ARM64 architecture; adjust the extension list as needed
  • Building the local software repository takes longer and requires more disk space
  • Default passwords are samples and must be changed for production

4 - slim

Minimal installation template without monitoring infrastructure, installs PostgreSQL directly from the internet

The slim configuration template provides minimal installation capability, installing a PostgreSQL high-availability cluster directly from the internet without deploying Infra monitoring infrastructure.

When you only need an available database instance without the monitoring system, consider using the Slim Installation mode.


Overview

  • Config Name: slim
  • Node Count: Single node
  • Description: Minimal installation template without monitoring infrastructure, installs PostgreSQL directly from the internet
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c slim [-i <primary_ip>]
./slim.yml   # Execute slim installation

Content

Source: pigsty/conf/slim.yml

---
#==============================================================#
# File      :   slim.yml
# Desc      :   Pigsty slim installation config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for slim / minimal installation
# No monitoring & infra will be installed, just raw postgresql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c slim
#   ./slim.yml

all:
  children:

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        #10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        #10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        #10.10.10.11: { pg_seq: 2, pg_role: replica } # you can add more!
        #10.10.10.12: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The slim template is Pigsty’s minimal installation configuration, designed for quick deployment of bare PostgreSQL clusters.

Use Cases:

  • Only need PostgreSQL database, no monitoring system required
  • Resource-limited small servers or edge devices
  • Quick deployment of temporary test databases
  • Already have monitoring system, only need PostgreSQL HA cluster

Key Features:

  • Uses slim.yml playbook instead of deploy.yml for installation
  • Installs software directly from the internet, with no local software repository
  • Retains core PostgreSQL HA capability (Patroni + etcd + HAProxy)
  • Minimized package downloads, faster installation
  • Uses PostgreSQL 18 by default

Differences from meta:

  • slim uses the dedicated slim.yml playbook and skips Infra module installation
  • Faster installation, less resource usage
  • Suitable for “just need a database” scenarios

Notes:

  • After a slim installation, you cannot view database status through Grafana
  • If monitoring is needed, use the meta or rich template
  • Replicas can be added as needed for high availability (see the sketch after this list)
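
A minimal sketch of adding a replica, following the commented stubs in the pg-meta cluster above:

pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }   # add more replicas like this
  vars:
    pg_cluster: pg-meta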

5 - fat

Feature-All-Test template: single-node installation of all extensions, building a local repo with all PG 13-18 versions

The fat configuration template is Pigsty's Feature-All-Test template: it installs all extensions on a single node and builds a local software repository containing all extensions for PostgreSQL 13-18 (six major versions).

This is a full-featured configuration for testing and development, suitable for scenarios that require a complete package cache or test all extensions.


Overview

  • Config Name: fat
  • Node Count: Single node
  • Description: Feature-All-Test template, installs all extensions, builds a local repo with all PG 13-18 versions
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, slim, fat

Usage:

./configure -c fat [-i <primary_ip>]

To specify a particular PostgreSQL version:

./configure -c fat -v 17   # Use PostgreSQL 17

Content

Source: pigsty/conf/fat.yml

---
#==============================================================#
# File      :   fat.yml
# Desc      :   Pigsty Feature-All-Test config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the single-node feature-all-test (fat) config for pigsty
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c fat [-v 18|17|16|15]
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_extensions: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'

        # define business users/roles : https://doc.pgsty.com/pgsql/user
        pg_users:
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, the password. can be a scram-sha-256 hash string or plain text
            #state: create                   # optional, create|absent, 'create' by default, use 'absent' to drop user
            #login: true                     # optional, can log in, true by default (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create databases? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to the pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin|readonly|readwrite|offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
            # Enhanced roles syntax (PG16+): roles can be string or object with options:
            #   - dbrole_readwrite                       # simple string: GRANT role
            #   - { name: role, admin: true }            # GRANT WITH ADMIN OPTION
            #   - { name: role, set: false }             # PG16: REVOKE SET OPTION
            #   - { name: role, inherit: false }         # PG16: REVOKE INHERIT OPTION
            #   - { name: role, state: absent }          # REVOKE membership
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database   }
          #- {name: dbuser_remove ,state: absent }       # use state: absent to remove a user

        # define business databases : https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among the ansible search path, e.g.: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - vector                      # install pgvector for vector similarity search
              - postgis                     # install postgis for geospatial type & index
              - timescaledb                 # install timescaledb for time-series data
              - { name: pg_wait_sampling, schema: monitor } # install pg_wait_sampling on monitor schema
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }

        # define HBA rules : https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }

        # define backup policies: https://doc.pgsty.com/pgsql/backup
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day 1am

        # define (OPTIONAL) L2 VIP that bind to primary
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: true # build local repo:  https://doc.pgsty.com/admin/repo
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        repo_packages: [
          node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
          pg18-full,pg18-time,pg18-gis,pg18-rag,pg18-fts,pg18-olap,pg18-feat,pg18-lang,pg18-type,pg18-util,pg18-func,pg18-admin,pg18-stat,pg18-sec,pg18-fdw,pg18-sim,pg18-etl,
          pg17-full,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-util,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
          pg16-full,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-util,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
          pg15-full,pg15-time,pg15-gis,pg15-rag,pg15-fts,pg15-olap,pg15-feat,pg15-lang,pg15-type,pg15-util,pg15-func,pg15-admin,pg15-stat,pg15-sec,pg15-fdw,pg15-sim,pg15-etl,
          pg14-full,pg14-time,pg14-gis,pg14-rag,pg14-fts,pg14-olap,pg14-feat,pg14-lang,pg14-type,pg14-util,pg14-func,pg14-admin,pg14-stat,pg14-sec,pg14-fdw,pg14-sim,pg14-etl,
          pg13-full,pg13-time,pg13-gis,pg13-rag,pg13-fts,pg13-olap,pg13-feat,pg13-lang,pg13-type,pg13-util,pg13-func,pg13-admin,pg13-stat,pg13-sec,pg13-fdw,pg13-sim,pg13-etl,
          infra-extra, kafka, java-runtime, sealos, tigerbeetle, polardb, ivorysql
        ]

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio users to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://doc.pgsty.com/docker
    # APP    : https://doc.pgsty.com/app
    #----------------------------------------------#
    # OPTIONAL, launch the example pgadmin app with ./app.yml, or bytebase with ./app.yml -e app=bytebase
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition

          # Admin GUI for PostgreSQL, launch with: ./app.yml
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]   # default user name
              PGADMIN_DEFAULT_PASSWORD: pigsty         # default password

          # Schema Migration GUI for PostgreSQL, launch with: ./app.yml -e app=bytebase
          bytebase:
            conf:
              BB_DOMAIN: http://ddl.pigsty  # replace it with your public domain name and postgres database url
              BB_PGURL: "postgresql://dbuser_bytebase:[email protected]:5432/bytebase?sslmode=prefer"


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    certbot_sign: false               # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]     # replace your email address to receive expiration notice
    infra_portal:                     # domain names and upstream servers
      home         : { domain: i.pigsty }
      pgadmin      : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase     : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" ,websocket: true}
      minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      #website:   # static local website example stub
      #  domain: repo.pigsty              # external domain name for static site
      #  certbot: repo.pigsty             # use certbot to sign https certificate for this static site
      #  path: /www/pigsty                # path to the static site directory

      #supabase:  # dynamic upstream service example stub
      #  domain: supa.pigsty          # external domain name for upstream service
      #  endpoint: "10.10.10.10:8000" # endpoint address of the upstream service
      #  websocket: true              # add websocket support
      #  certbot: supa.pigsty         # certbot cert name, apply with `make cert`

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: true              # overwrite node hostname on multi-node template
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - 10.10.10.10 i.pigsty sss.pigsty
      - 10.10.10.10 adm.pigsty ddl.pigsty repo.pigsty supa.pigsty
    node_repo_modules: local,node,infra,pgsql # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed on current nodes with the latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ] # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    # this template uses minio as the backup repo; set `pgbackrest_method` to 'local' to use the local fs repo instead
    # you can also use external object storage as backup repo (see the 's3' repo stub below)
    pgbackrest_method: minio          # pgbackrest backup repo method: local, minio, or other repos defined in `pgbackrest_repo`
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, not used by minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
      s3:                             # you can use cloud object storage as backup repo
        type: s3                      # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on s3 repo
        retention_full: 14            # keep full backup for the last 14 days

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The fat template is Pigsty’s full-featured test configuration, designed for completeness testing and offline package building.

Key Features:

  • All Extensions: Installs all categorized extension packages for PostgreSQL 18
  • Multi-version Repository: Local repo contains all six major versions of PostgreSQL 13-18
  • Complete Component Stack: Includes MinIO backup, Docker applications, VIP, etc.
  • Enterprise Components: Includes Kafka, PolarDB, IvorySQL, TigerBeetle, etc.

Repository Contents:

Category                 Description
PostgreSQL 13-18         Six major versions’ kernels and all extensions
Extension Categories     time, gis, rag, fts, olap, feat, lang, type, util, func, admin, stat, sec, fdw, sim, etl
Enterprise Components    Kafka, Java Runtime, Sealos, TigerBeetle
Database Kernels         PolarDB, IvorySQL

Differences from rich:

  • fat contains all six major versions of PostgreSQL (13-18), while rich only contains the current default version
  • fat contains additional enterprise components (Kafka, PolarDB, IvorySQL, etc.)
  • fat requires more disk space and a longer build time

Use Cases:

  • Pigsty development testing and feature validation
  • Building complete multi-version offline software packages
  • Testing all extension compatibility scenarios
  • Enterprise environments pre-caching all software packages

Notes:

  • Requires large disk space (100GB+ recommended) to store all packages
  • Building the local software repository takes considerable time
  • Some extensions are unavailable on the ARM64 architecture
  • Default passwords are samples and must be changed for production

6 - infra

Only installs observability infrastructure, dedicated template without PostgreSQL and etcd

The infra configuration template only deploys Pigsty’s observability infrastructure components (VictoriaMetrics/Grafana/Loki/Nginx, etc.), without PostgreSQL and etcd.

Suitable for scenarios requiring a standalone monitoring stack, such as monitoring external PostgreSQL/RDS instances or other data sources.


Overview

  • Config Name: infra
  • Node Count: Single or multiple nodes
  • Description: Only installs observability infrastructure, without PostgreSQL and etcd
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c infra [-i <primary_ip>]
./infra.yml    # Only execute infra playbook

Content

Source: pigsty/conf/infra.yml

---
#==============================================================#
# File      :   infra.yml
# Desc      :   Infra Only Config
# Ctime     :   2025-12-16
# Mtime     :   2025-12-30
# Docs      :   https://doc.pgsty.com/infra
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for deploying the victoria stack alone
# tutorial: https://doc.pgsty.com/infra
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c infra
#   ./infra.yml

all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
        #10.10.10.11: { infra_seq: 2 } # you can add more nodes if you want
        #10.10.10.12: { infra_seq: 3 } # don't forget to assign unique infra_seq for each node
      vars:
        docker_enabled: true            # enable docker with ./docker.yml
        docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        pg_exporters:     # bin/pgmon-add pg-rds
          20001: { pg_cluster: pg-rds ,pg_seq: 1 ,pg_host: 10.10.10.10 ,pg_exporter_url: 'postgres://postgres:[email protected]:5432/postgres' }

  vars:                                 # global variables
    version: v4.0.0                     # pigsty version string
    admin_ip: 10.10.10.10               # admin node ip address
    region: default                     # upstream mirror region: default,china,europe
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    infra_portal:                       # infra services exposed via portal
      home : { domain: i.pigsty }       # default domain name
    repo_enabled: false                 # online installation without repo
    node_repo_modules: node,infra,pgsql # add these repos directly
    #haproxy_enabled: false              # enable haproxy on infra node?
    #vector_enabled: false               # enable vector on infra node?

    # DON'T FORGET TO CHANGE DEFAULT PASSWORDS!
    grafana_admin_password: pigsty
...

Explanation

The infra template is Pigsty’s pure monitoring stack configuration, designed for standalone deployment of observability infrastructure.

Use Cases:

  • Monitoring external PostgreSQL instances (RDS, self-hosted, etc.)
  • Need standalone monitoring/alerting platform
  • Already have PostgreSQL clusters, only need to add monitoring
  • As a central console for multi-cluster monitoring

Included Components:

  • VictoriaMetrics: Time series database for storing metrics
  • VictoriaLogs: Log aggregation system
  • VictoriaTraces: Distributed tracing system
  • Grafana: Visualization dashboards
  • Alertmanager: Alert management
  • Nginx: Reverse proxy and web entry

Not Included:

  • PostgreSQL database cluster
  • etcd distributed coordination service
  • MinIO object storage

Monitoring External Instances: After configuration, add monitoring for external PostgreSQL instances via the pgsql-monitor.yml playbook:

pg_exporters:
  20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.100 }
  20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.101 }
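
If the target instance is not managed by Pigsty, you can also supply the monitor credentials inline via pg_exporter_url, mirroring the pg-rds example in the template above (a hedged sketch; the address and credentials here are placeholders):

pg_exporters:
  20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.100, pg_exporter_url: 'postgres://dbuser_monitor:[email protected]:5432/postgres' }

Then register the target with the helper referenced in the template comment: bin/pgmon-add pg-foo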

Notes:

  • This template will not install any databases
  • For full functionality, use meta or rich template
  • Can add multiple infra nodes for high availability as needed (see the sketch below)
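
As a sketch, a three-node infra group for high availability mirrors the commented hosts in the template above (the IP addresses are placeholders; each node needs a unique infra_seq):

infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }
    10.10.10.12: { infra_seq: 3 }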

7 - Kernel Templates

8 - pgsql

Native PostgreSQL kernel, supports deployment of PostgreSQL versions 13 to 18

The pgsql configuration template uses the native PostgreSQL kernel, which is Pigsty’s default database kernel, supporting PostgreSQL versions 13 to 18.


Overview

  • Config Name: pgsql
  • Node Count: Single node
  • Description: Native PostgreSQL kernel configuration template
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c pgsql [-i <primary_ip>]

To specify a particular PostgreSQL version (e.g., 17):

./configure -c pgsql -v 17

Content

Source: pigsty/conf/pgsql.yml

---
#==============================================================#
# File      :   pgsql.yml
# Desc      :   1-node PostgreSQL Config template
# Ctime     :   2025-02-23
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for the basic PostgreSQL kernel.
# Nothing special, just a basic setup with one node.
# tutorial: https://doc.pgsty.com/pgsql/kernel/postgres
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c pgsql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ postgis, timescaledb, vector ]}
        pg_extensions: [ postgis, timescaledb, pgvector, pg_wait_sampling ]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1 AM

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The pgsql template is Pigsty’s standard kernel configuration, using community-native PostgreSQL.

Version Support:

  • PostgreSQL 18 (default)
  • PostgreSQL 17, 16, 15, 14, 13

Use Cases:

  • Need to use the latest PostgreSQL features
  • Need the widest extension support
  • Standard production environment deployment
  • Same functionality as meta template, explicitly declaring native kernel usage

Differences from meta:

  • pgsql template explicitly declares using native PostgreSQL kernel
  • Suitable for scenarios needing clear distinction between different kernel types

9 - citus

Citus distributed PostgreSQL cluster, provides horizontal scaling and sharding capabilities

The citus configuration template deploys a distributed PostgreSQL cluster using the Citus extension, providing transparent horizontal scaling and data sharding capabilities.


Overview

  • Config Name: citus
  • Node Count: One active node by default; the commented example expands to 5 clusters (1 coordinator cluster + 4 data node clusters, 10 nodes in total)
  • Description: Citus distributed PostgreSQL cluster
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c citus [-i <primary_ip>]

Content

Source: pigsty/conf/citus.yml

---
#==============================================================#
# File      :   citus.yml
# Desc      :   1-node Citus (Distributed) Config Template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Citus Distributed Cluster
# tutorial: https://doc.pgsty.com/pgsql/kernel/citus
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c citus
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # pg-citus: 10-node citus cluster
    #----------------------------------------------#
    pg-citus: # the citus group contains 5 clusters
      hosts:
        10.10.10.10: { pg_group: 0, pg_cluster: pg-citus0 ,pg_vip_address: 10.10.10.60/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.11: { pg_group: 0, pg_cluster: pg-citus0 ,pg_vip_address: 10.10.10.60/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.12: { pg_group: 1, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.61/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.13: { pg_group: 1, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.61/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.14: { pg_group: 2, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.62/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.15: { pg_group: 2, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.62/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.16: { pg_group: 3, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.63/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.17: { pg_group: 3, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.63/24 ,pg_seq: 1, pg_role: replica }
        #10.10.10.18: { pg_group: 4, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.64/24 ,pg_seq: 0, pg_role: primary }
        #10.10.10.19: { pg_group: 4, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.64/24 ,pg_seq: 1, pg_role: replica }
      vars:
        pg_mode: citus                            # pgsql cluster mode: citus
        pg_shard: pg-citus                        # citus shard name: pg-citus
        pg_primary_db: citus                      # primary database used by citus
        pg_dbsu_password: DBUser.Postgres         # enable dbsu password access for citus
        pg_extensions: [ citus, postgis, pgvector, topn, pg_cron, hll ]  # install these extensions
        pg_libs: 'citus, pg_cron, pg_stat_statements' # citus will be added by patroni automatically
        pg_users: [{ name: dbuser_citus ,password: DBUser.Citus ,pgbouncer: true ,roles: [ dbrole_admin ]    }]
        pg_databases: [{ name: citus ,owner: dbuser_citus ,extensions: [ citus, vector, topn, pg_cron, hll ] }]
        pg_parameters:
          cron.database_name: citus
          citus.node_conninfo: 'sslrootcert=/pg/cert/ca.crt sslmode=verify-full'
        pg_hba_rules:
          - { user: 'all' ,db: all  ,addr: 127.0.0.1/32  ,auth: ssl ,title: 'all user ssl access from localhost' }
          - { user: 'all' ,db: all  ,addr: intra         ,auth: ssl ,title: 'all user ssl access from intranet'  }
        pg_vip_enabled: true                      # enable vip for citus cluster
        pg_vip_interface: eth1                    # vip interface for all members (you can override this in each host)

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: true            # overwrite hostname since this is a multi-node template
    node_repo_modules: node,infra,pgsql # add these repos directly to all nodes
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 17                      # Default PostgreSQL Major Version is 17
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-olap ,pg17-feat ,pg17-lang ,pg17-type ,pg17-util ,pg17-func ,pg17-admin ,pg17-stat ,pg17-sec ,pg17-fdw ,pg17-sim ,pg17-etl]
    #repo_extra_packages: [ pgsql-main, citus, postgis, pgvector, pg_cron, hll, topn ]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The citus template deploys a Citus distributed PostgreSQL cluster, suitable for large-scale data scenarios requiring horizontal scaling.

Key Features:

  • Transparent data sharding, automatically distributes data to multiple nodes
  • Parallel query execution, aggregates results from multiple nodes
  • Supports distributed transactions (2PC)
  • Maintains PostgreSQL SQL compatibility

Architecture:

  • Coordinator Node (pg-citus0): Receives queries and routes them to data nodes
  • Data Nodes (pg-citus1~4): Store sharded data

Use Cases:

  • Single table data volume exceeds single-node capacity
  • Need horizontal scaling for write and query performance
  • Multi-tenant SaaS applications
  • Real-time analytical workloads

Notes:

  • Citus supports PostgreSQL 14~17
  • Distributed tables require specifying a distribution column (see the sketch after this list)
  • Some PostgreSQL features may be limited (e.g., cross-shard foreign keys)
  • The ARM64 architecture is not supported
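
As a sketch of the distribution column requirement noted above (table and column names are illustrative; create_distributed_table is the standard Citus API):

-- connect to the citus database on the coordinator, e.g.:
-- psql postgres://dbuser_citus:[email protected]:5432/citus
CREATE TABLE orders (
    tenant_id bigint NOT NULL,   -- the distribution column
    order_id  bigint NOT NULL,
    payload   jsonb,
    PRIMARY KEY (tenant_id, order_id)  -- must include the distribution column
);
SELECT create_distributed_table('orders', 'tenant_id');  -- shard rows across worker nodes by tenant_id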

10 - mssql

WiltonDB / Babelfish kernel, provides Microsoft SQL Server protocol and syntax compatibility

The mssql configuration template uses WiltonDB / Babelfish database kernel instead of native PostgreSQL, providing Microsoft SQL Server wire protocol (TDS) and T-SQL syntax compatibility.

For the complete tutorial, see: Babelfish (MSSQL) Kernel Guide


Overview

  • Config Name: mssql
  • Node Count: Single node
  • Description: WiltonDB / Babelfish configuration template, provides SQL Server protocol compatibility
  • OS Distro: el8, el9, el10, u22, u24 (Debian not available)
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c mssql [-i <primary_ip>]

Content

Source: pigsty/conf/mssql.yml

---
#==============================================================#
# File      :   mssql.yml
# Desc      :   Babelfish: WiltonDB (MSSQL Compatible) template
# Ctime     :   2020-08-01
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Babelfish Kernel (WiltonDB),
# Which is a PostgreSQL 15 fork with SQL Server Compatibility
# tutorial: https://doc.pgsty.com/pgsql/kernel/babelfish
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c mssql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # Babelfish Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_mssql ,password: DBUser.MSSQL ,superuser: true, pgbouncer: true ,roles: [dbrole_admin], comment: superuser & owner for babelfish  }
        pg_databases:
          - name: mssql
            baseline: mssql.sql
            extensions: [uuid-ossp, babelfishpg_common, babelfishpg_tsql, babelfishpg_tds, babelfishpg_money, pg_hint_plan, system_stats, tds_fdw]
            owner: dbuser_mssql
            parameters: { 'babelfishpg_tsql.migration_mode' : 'multi-db' }
            comment: babelfish cluster, a MSSQL compatible pg cluster
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1 AM

        # Babelfish / WiltonDB Ad Hoc Settings
        pg_mode: mssql                     # Microsoft SQL Server Compatible Mode
        pg_version: 15
        pg_packages: [ wiltondb, pgsql-common, sqlcmd ]
        pg_libs: 'babelfishpg_tds, pg_stat_statements, auto_explain' # preload babelfishpg_tds via shared_preload_libraries
        pg_default_hba_rules: # overwrite default HBA rules for babelfish cluster, order by `order`
          - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident' ,order: 100}
          - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
          - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost' ,order: 200}
          - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
          - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
          - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
          - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password' ,order: 400}
          - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl' ,order: 450}
          - { user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd' ,order: 500}
          - { user: dbuser_mssql ,db: mssql       ,addr: intra     ,auth: md5   ,title: 'allow mssql dbsu intranet access' ,order: 525} # <--- use md5 auth method for mssql user
          - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket' ,order: 550}
          - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password' ,order: 600}
          - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet' ,order: 650}
        pg_default_services: # route primary & replica service to mssql port 1433
          - { name: primary ,port: 5433 ,dest: 1433  ,check: /primary   ,selector: "[]" }
          - { name: replica ,port: 5434 ,dest: 1433  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
          - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
          - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]" }

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false                 # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql,mssql # extra mssql repo is required
    node_tune: oltp                           # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 15                            # Babelfish kernel is compatible with postgres 15
    pg_conf: oltp.yml                         # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The mssql template allows you to use SQL Server Management Studio (SSMS) or other SQL Server client tools to connect to PostgreSQL.

Key Features:

  • Uses TDS protocol (port 1433), compatible with SQL Server clients
  • Supports T-SQL syntax, low migration cost
  • Retains PostgreSQL’s ACID properties and extension ecosystem
  • Supports multi-db and single-db migration modes (see the sketch below)
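
The migration mode is a database-level parameter, as shown in the template above; a minimal sketch switching the definition to single-db mode (one T-SQL database mapped onto the PostgreSQL database; the mode is normally chosen at initialization):

pg_databases:
  - name: mssql
    baseline: mssql.sql
    parameters: { 'babelfishpg_tsql.migration_mode' : 'single-db' }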

Connection Methods:

# Using sqlcmd command line tool
sqlcmd -S 10.10.10.10,1433 -U dbuser_mssql -P DBUser.MSSQL -d mssql

# Using SSMS or Azure Data Studio
# Server: 10.10.10.10,1433
# Authentication: SQL Server Authentication
# Login: dbuser_mssql
# Password: DBUser.MSSQL

Use Cases:

  • Migrating from SQL Server to PostgreSQL
  • Applications needing to support both SQL Server and PostgreSQL clients
  • Leveraging PostgreSQL ecosystem while maintaining T-SQL compatibility

Notes:

  • WiltonDB is based on PostgreSQL 15 and does not support features from later major versions
  • Some T-SQL syntax may have compatibility differences, refer to Babelfish compatibility documentation
  • Must use md5 authentication method (not scram-sha-256)

11 - polar

PolarDB for PostgreSQL kernel, provides Aurora-style storage-compute separation capability

The polar configuration template uses Alibaba Cloud’s PolarDB for PostgreSQL database kernel instead of native PostgreSQL, providing “cloud-native” Aurora-style storage-compute separation capability.

For the complete tutorial, see: PolarDB for PostgreSQL (POLAR) Kernel Guide


Overview

  • Config Name: polar
  • Node Count: Single node
  • Description: Uses PolarDB for PostgreSQL kernel
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c polar [-i <primary_ip>]

Content

Source: pigsty/conf/polar.yml

---
#==============================================================#
# File      :   polar.yml
# Desc      :   Pigsty 1-node PolarDB Kernel Config Template
# Ctime     :   2020-08-05
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for PolarDB PG Kernel,
# Which is a PostgreSQL 15 fork with RAC flavor features
# tutorial: https://doc.pgsty.com/pgsql/kernel/polardb
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c polar
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PolarDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1 AM

        # PolarDB Ad Hoc Settings
        pg_version: 15                            # PolarDB PG is based on PG 15
        pg_mode: polar                            # PolarDB PG Compatible mode
        pg_packages: [ polardb, pgsql-common ]    # Replace PG kernel with PolarDB kernel
        pg_exporter_exclude_database: 'template0,template1,postgres,polardb_admin'
        pg_default_roles:                         # PolarDB require replicator as superuser
          - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
          - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
          - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
          - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
          - { name: postgres     ,superuser: true  ,comment: system superuser }
          - { name: replicator   ,superuser: true  ,replication: true ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator } # <- superuser is required for replication
          - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
          - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 15                      # PolarDB is compatible with PG 15
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Explanation

The polar template uses Alibaba Cloud’s open-source PolarDB for PostgreSQL kernel, providing cloud-native database capabilities.

Key Features:

  • Storage-compute separation architecture, compute and storage nodes can scale independently
  • Supports one-write-multiple-read, read replicas scale in seconds
  • Compatible with PostgreSQL ecosystem, maintains SQL compatibility
  • Supports shared storage scenarios, suitable for cloud environment deployment

Use Cases:

  • Cloud-native scenarios requiring storage-compute separation architecture
  • Read-heavy write-light workloads
  • Scenarios requiring quick scaling of read replicas
  • Test environments for evaluating PolarDB features

Notes:

  • PolarDB is based on PostgreSQL 15 and does not support features from later major versions
  • The replication user requires superuser privileges (unlike native PostgreSQL)
  • Some PostgreSQL extensions may have compatibility issues
  • The ARM64 architecture is not supported

12 - ivory

IvorySQL kernel, provides Oracle syntax and PL/SQL compatibility

The ivory configuration template uses Highgo’s IvorySQL database kernel instead of native PostgreSQL, providing Oracle syntax and PL/SQL compatibility.

For the complete tutorial, see: IvorySQL (Oracle Compatible) Kernel Guide


Overview

  • Config Name: ivory
  • Node Count: Single node
  • Description: Uses IvorySQL Oracle-compatible kernel
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c ivory [-i <primary_ip>]

Content

Source: pigsty/conf/ivory.yml

---
#==============================================================#
# File      :   ivory.yml
# Desc      :   IvorySQL 5 (Oracle Compatible) template
# Ctime     :   2024-08-05
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/pgsql/kernel/ivorysql
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for IvorySQL 5 Kernel,
# Which is a PostgreSQL 18 fork with Oracle Compatibility
# tutorial: https://doc.pgsty.com/pgsql/kernel/ivorysql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c ivory
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # IvorySQL Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1 AM

        # IvorySQL Ad Hoc Settings
        pg_mode: ivory                                                 # Use IvorySQL Oracle Compatible Mode
        pg_packages: [ ivorysql, pgsql-common ]                        # install IvorySQL instead of postgresql kernel
        pg_libs: 'liboracle_parser, pg_stat_statements, auto_explain'  # pre-load oracle parser

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # IvorySQL kernel is compatible with postgres 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ivory template uses Highgo’s open-source IvorySQL kernel, providing Oracle database compatibility.

Key Features:

  • Supports Oracle PL/SQL syntax
  • Compatible with Oracle data types (NUMBER, VARCHAR2, etc.; see the sketch after this list)
  • Supports Oracle-style packages
  • Retains all standard PostgreSQL functionality
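
A brief sketch of the Oracle-flavored syntax (illustrative only; run it in an Oracle-compatible database with liboracle_parser preloaded, as configured above):

-- Oracle data types accepted by IvorySQL's Oracle parser
CREATE TABLE emp (
    empno NUMBER(4)     PRIMARY KEY,
    ename VARCHAR2(10)  NOT NULL
);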

Use Cases:

  • Migrating from Oracle to PostgreSQL
  • Applications needing both Oracle and PostgreSQL syntax support
  • Leveraging PostgreSQL ecosystem while maintaining PL/SQL compatibility
  • Test environments for evaluating IvorySQL features

Notes:

  • IvorySQL 5 is based on PostgreSQL 18
  • liboracle_parser must be preloaded via shared_preload_libraries
  • pgbackrest may have checksum issues in Oracle-compatible mode, so PITR capability is limited
  • Earlier IvorySQL releases only supported EL systems; refer to the OS distro list in the overview above for current support

13 - mysql

OpenHalo kernel, provides MySQL protocol and syntax compatibility

The mysql configuration template uses OpenHalo database kernel instead of native PostgreSQL, providing MySQL wire protocol and SQL syntax compatibility.


Overview

  • Config Name: mysql
  • Node Count: Single node
  • Description: OpenHalo MySQL-compatible kernel configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c mysql [-i <primary_ip>]

Content

Source: pigsty/conf/mysql.yml

---
#==============================================================#
# File      :   mysql.yml
# Desc      :   1-node OpenHaloDB (MySQL Compatible) template
# Ctime     :   2025-04-03
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for OpenHalo PG Kernel,
# Which is a PostgreSQL 14 fork with MySQL Wire Compatibility
# tutorial: https://doc.pgsty.com/pgsql/kernel/openhalo
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c mysql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # OpenHalo Database Cluster
    #----------------------------------------------#
    # connect with mysql client: mysql -h 10.10.10.10 -u dbuser_meta -D mysql (the actual database is 'postgres', and 'mysql' is a schema)
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: postgres, extensions: [aux_mysql]} # the mysql compatible database
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1 AM

        # OpenHalo Ad Hoc Setting
        pg_mode: mysql                    # MySQL Compatible Mode by HaloDB
        pg_version: 14                    # The current HaloDB is compatible with PG Major Version 14
        pg_packages: [ openhalodb, pgsql-common ]  # install openhalodb instead of postgresql kernel

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname in single-node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 14                      # OpenHalo is compatible with PG 14
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The mysql template uses the OpenHalo kernel, allowing you to connect to PostgreSQL using MySQL client tools.

Key Features:

  • Uses MySQL protocol (port 3306), compatible with MySQL clients
  • Supports a subset of MySQL SQL syntax
  • Retains PostgreSQL’s ACID properties and storage engine
  • Supports both PostgreSQL and MySQL protocol connections simultaneously

Connection Methods:

# Using MySQL client
mysql -h 10.10.10.10 -P 3306 -u dbuser_meta -pDBUser.Meta

# Also retains PostgreSQL connection capability
psql postgres://dbuser_meta:[email protected]:5432/meta
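
Since both protocols operate on the same underlying data, a quick smoke test is to write over the MySQL protocol and read the result back over the PostgreSQL protocol. The sketch below assumes this template's defaults, where the MySQL-protocol 'mysql' database maps to the mysql schema inside the postgres database (per the comments in the config above):

# create and populate a table over the MySQL protocol
mysql -h 10.10.10.10 -P 3306 -u dbuser_meta -pDBUser.Meta -D mysql \
  -e "CREATE TABLE smoke_test (id INT PRIMARY KEY); INSERT INTO smoke_test VALUES (1);"

# read it back over the PostgreSQL protocol, from the mysql schema of postgres
psql postgres://dbuser_meta:[email protected]:5432/postgres \
  -c 'SELECT * FROM mysql.smoke_test;'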

Use Cases:

  • Migrating from MySQL to PostgreSQL
  • Applications needing to support both MySQL and PostgreSQL clients
  • Leveraging PostgreSQL ecosystem while maintaining MySQL compatibility

Notes:

  • OpenHalo is based on PostgreSQL 14 and does not support features from later major versions
  • Some MySQL syntax may have compatibility differences
  • Only supports EL8/EL9 systems
  • ARM64 architecture not supported

14 - pgtde

Percona PostgreSQL kernel, provides Transparent Data Encryption (pg_tde) capability

The pgtde configuration template uses Percona PostgreSQL database kernel, providing Transparent Data Encryption (TDE) capability.


Overview

  • Config Name: pgtde
  • Node Count: Single node
  • Description: Percona PostgreSQL transparent data encryption configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c pgtde [-i <primary_ip>]

Content

Source: pigsty/conf/pgtde.yml

---
#==============================================================#
# File      :   pgtde.yml
# Desc      :   PG TDE with Percona PostgreSQL 1-node template
# Ctime     :   2025-07-04
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Percona PostgreSQL Distribution
# With pg_tde extension, which is compatible with PostgreSQL 18.1
# tutorial: https://doc.pgsty.com/pgsql/kernel/percona
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c pgtde
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # Percona Postgres Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - name: meta
            baseline: cmdb.sql
            comment: pigsty tde database
            schemas: [pigsty]
            extensions: [ vector, postgis, pg_tde ,pgaudit, { name: pg_stat_monitor, schema: monitor } ]
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # Percona PostgreSQL TDE Ad Hoc Settings
        pg_packages: [ percona-main, pgsql-common ]  # install percona postgres packages
        pg_libs: 'pg_tde, pgaudit, pg_stat_statements, pg_stat_monitor, auto_explain'

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql,percona
    node_tune: oltp

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default Percona TDE PG Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The pgtde template uses Percona PostgreSQL kernel, providing enterprise-grade transparent data encryption capability.

Key Features:

  • Transparent Data Encryption: Data automatically encrypted on disk, transparent to applications
  • Key Management: Supports local keys and external Key Management Systems (KMS)
  • Table-level Encryption: Selectively encrypt sensitive tables
  • Full Compatibility: Fully compatible with native PostgreSQL

Use Cases:

  • Meeting data security compliance requirements (e.g., PCI-DSS, HIPAA)
  • Storing sensitive data (e.g., personal information, financial data)
  • Scenarios requiring data-at-rest encryption
  • Enterprise environments with strict data security requirements

Usage:

-- Create an encrypted table (pg_tde provides the tde_heap access method)
CREATE TABLE sensitive_data (
    id SERIAL PRIMARY KEY,
    ssn VARCHAR(11)
) USING tde_heap;

-- Or enable encryption on an existing table (this rewrites the table)
ALTER TABLE existing_table SET ACCESS METHOD tde_heap;
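
Before the first encrypted table can be created, pg_tde also needs a key provider and a principal key. The sketch below registers a local file keyring via psql; the keyring path and key names are illustrative, and the exact pg_tde function names vary between releases, so consult the Percona documentation for your version:

# register a file-based keyring and set the principal key (names are illustrative)
psql meta -c "SELECT pg_tde_add_key_provider_file('local_keyring', '/var/lib/pg_tde/keyring.file');"
psql meta -c "SELECT pg_tde_set_principal_key('meta_principal_key', 'local_keyring');"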

Notes:

  • Percona PostgreSQL is based on PostgreSQL 18
  • Encryption brings some performance overhead (typically 5-15%)
  • Encryption keys must be properly managed
  • ARM64 architecture not supported

15 - oriole

OrioleDB kernel, provides bloat-free OLTP enhanced storage engine

The oriole configuration template uses OrioleDB storage engine instead of PostgreSQL’s default Heap storage, providing bloat-free, high-performance OLTP capability.


Overview

  • Config Name: oriole
  • Node Count: Single node
  • Description: OrioleDB bloat-free storage engine configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c oriole [-i <primary_ip>]

Content

Source: pigsty/conf/oriole.yml

---
#==============================================================#
# File      :   oriole.yml
# Desc      :   1-node OrioleDB (OLTP Enhancement) template
# Ctime     :   2025-04-05
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for OrioleDB Kernel,
# Which is a Patched PostgreSQL 17 fork without bloat
# tutorial: https://doc.pgsty.com/pgsql/kernel/oriole
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c oriole
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # OrioleDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty], extensions: [orioledb]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

        # OrioleDB Ad Hoc Settings
        pg_mode: oriole                                         # oriole compatible mode
        pg_packages: [ oriole, pgsql-common ]                   # install OrioleDB kernel
        pg_libs: 'orioledb, pg_stat_statements, auto_explain'   # Load OrioleDB Extension

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 17                      # OrioleDB Kernel is based on PG 17
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The oriole template uses OrioleDB storage engine, fundamentally solving PostgreSQL table bloat problems.

Key Features:

  • Bloat-free Design: implements MVCC via undo logs instead of accumulating dead tuple versions in the heap
  • No VACUUM Required: Eliminates performance jitter from autovacuum
  • Row-level WAL: More efficient logging and replication
  • Compressed Storage: Built-in data compression, reduces storage space

Use Cases:

  • High-frequency update OLTP workloads
  • Applications sensitive to write latency
  • Need for stable response times (eliminates VACUUM impact)
  • Large tables with frequent updates causing bloat

Usage:

-- Create table using OrioleDB storage
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INT,
    amount DECIMAL(10,2)
) USING orioledb;

-- Existing heap tables cannot be converted in place; they must be rebuilt as orioledb tables
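
To confirm which access method a table actually uses, you can query the standard system catalogs:

# check the access method of the orders table (heap vs orioledb)
psql meta -c "SELECT c.relname, a.amname FROM pg_class c JOIN pg_am a ON a.oid = c.relam WHERE c.relname = 'orders';"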

Notes:

  • OrioleDB is based on PostgreSQL 17
  • orioledb must be loaded via shared_preload_libraries (this template already sets it through pg_libs)
  • Some PostgreSQL features may not be fully supported
  • ARM64 architecture not supported

16 - supabase

Self-host Supabase using Pigsty-managed PostgreSQL, an open-source Firebase alternative

The supabase configuration template provides a reference configuration for self-hosting Supabase, using Pigsty-managed PostgreSQL as the underlying storage.

For more details, see Supabase Self-Hosting Tutorial


Overview

  • Config Name: supabase
  • Node Count: Single node
  • Description: Self-host Supabase using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, rich

Usage:

./configure -c supabase [-i <primary_ip>]

Content

Source: pigsty/conf/supabase.yml

---
#==============================================================#
# File      :   supabase.yml
# Desc      :   Pigsty configuration for self-hosting supabase
# Ctime     :   2023-09-19
# Mtime     :   2025-12-28
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# supabase is available on el8/el9/u22/u24/d12 with pg15,16,17,18
# tutorial: https://doc.pgsty.com/app/supabase
# Usage:
#   curl https://repo.pigsty.io/get | bash    # install pigsty
#   ./configure -c supabase   # use this supabase conf template
#   ./deploy.yml              # install pigsty & pgsql & minio
#   ./docker.yml              # install docker & docker compose
#   ./app.yml                 # launch supabase with docker compose

all:
  children:


    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: false    # disable local repo

    #----------------------------------------------#
    # ETCD : https://doc.pgsty.com/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false  # enable to prevent purging running etcd instance

    #----------------------------------------------#
    # MINIO : https://doc.pgsty.com/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # PostgreSQL cluster for Supabase self-hosting
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          # supabase roles: anon, authenticated, dashboard_user
          - { name: anon           ,login: false }
          - { name: authenticated  ,login: false }
          - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
          - { name: service_role   ,login: false ,bypassrls: true }
          # supabase users: please use the same password
          - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true   ,roles: [ dbrole_admin ] ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
          - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] }
          - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] ,createrole: true }
          - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true ,roles: [ dbrole_admin ]}
          - { name: supabase_etl_admin         ,password: 'DBUser.Supa' ,replication: true ,roles: [ pg_read_all_data ]}
          - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles:   [ pg_read_all_data, dbrole_readonly ]}
        pg_databases:
          - name: postgres
            baseline: supabase.sql
            owner: supabase_admin
            comment: supabase postgres database
            schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
            extensions:
              - { name: pgcrypto         ,schema: extensions } # cryptographic functions
              - { name: pg_net           ,schema: extensions } # async HTTP
              - { name: pgjwt            ,schema: extensions } # json web token API for postgres
              - { name: uuid-ossp        ,schema: extensions } # generate universally unique identifiers (UUIDs)
              - { name: pgsodium         ,schema: extensions } # pgsodium is a modern cryptography library for Postgres.
              - { name: supabase_vault   ,schema: extensions } # Supabase Vault Extension
              - { name: pg_graphql       ,schema: extensions } # pg_graphql: GraphQL support
              - { name: pg_jsonschema    ,schema: extensions } # pg_jsonschema: Validate json schema
              - { name: wrappers         ,schema: extensions } # wrappers: FDW collections
              - { name: http             ,schema: extensions } # http: allows web page retrieval inside the database.
              - { name: pg_cron          ,schema: extensions } # pg_cron: Job scheduler for PostgreSQL
              - { name: timescaledb      ,schema: extensions } # timescaledb: Enables scalable inserts and complex queries for time-series data
              - { name: pg_tle           ,schema: extensions } # pg_tle: Trusted Language Extensions for PostgreSQL
              - { name: vector           ,schema: extensions } # pgvector: the vector similarity search
              - { name: pgmq             ,schema: extensions } # pgmq: A lightweight message queue like AWS SQS and RSMQ
          - { name: supabase ,owner: supabase_admin ,comment: supabase analytics database ,schemas: [ extensions, _analytics ] }

        # supabase required extensions
        pg_libs: 'timescaledb, pgsodium, plpgsql, plpgsql_check, pg_cron, pg_net, pg_stat_statements, auto_explain, pg_wait_sampling, pg_tle, plan_filter'
        pg_extensions: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        pg_parameters: { cron.database_name: postgres }
        pg_hba_rules: # supabase hba rules, require access from docker network
          - { user: all ,db: postgres  ,addr: intra         ,auth: pwd ,title: 'allow supabase access from intranet'    }
          - { user: all ,db: postgres  ,addr: 172.17.0.0/16 ,auth: pwd ,title: 'allow access from local docker network' }
        node_crontab:
          - '00 01 * * * postgres /pg/bin/pg-backup full'  # make a full backup every 1am
          - '*  *  * * * postgres /pg/bin/supa-kick'       # kick supabase _analytics lag per minute: https://github.com/pgsty/pigsty/issues/581

    #----------------------------------------------#
    # Supabase
    #----------------------------------------------#
    # ./docker.yml
    # ./app.yml

    # the supabase stateless containers (default username & password: supabase/pigsty)
    supabase:
      hosts:
        10.10.10.10: {}
      vars:
        docker_enabled: true                              # enable docker on this group
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: supabase                                     # specify app name (supa) to be installed (in the apps)
        apps:                                             # define all applications
          supabase:                                       # the definition of supabase app
            conf:                                         # override /opt/supabase/.env

              # IMPORTANT: CHANGE JWT_SECRET AND REGENERATE CREDENTIALS ACCORDINGLY!!!
              # https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
              JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
              ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
              SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
              PG_META_CRYPTO_KEY: your-encryption-key-32-chars-min

              DASHBOARD_USERNAME: supabase
              DASHBOARD_PASSWORD: pigsty

              # 32~64 random characters string for logflare
              LOGFLARE_PUBLIC_ACCESS_TOKEN: 1234567890abcdef1234567890abcdef
              LOGFLARE_PRIVATE_ACCESS_TOKEN: fedcba0987654321fedcba0987654321

              # postgres connection string (use the correct ip and port)
              POSTGRES_HOST: 10.10.10.10      # point to the local postgres node
              POSTGRES_PORT: 5436             # access via the 'default' service, which always routes to the primary postgres
              POSTGRES_DB: postgres           # the supabase underlying database
              POSTGRES_PASSWORD: DBUser.Supa  # password for supabase_admin and multiple supabase users

              # expose supabase via domain name
              SITE_URL: https://supa.pigsty                # <------- Change This to your external domain name
              API_EXTERNAL_URL: https://supa.pigsty        # <------- Otherwise the storage api may not work!
              SUPABASE_PUBLIC_URL: https://supa.pigsty     # <------- DO NOT FORGET TO PUT IT IN infra_portal!

              # if using s3/minio as file storage
              S3_BUCKET: data
              S3_ENDPOINT: https://sss.pigsty:9000
              S3_ACCESS_KEY: s3user_data
              S3_SECRET_KEY: S3User.Data
              S3_FORCE_PATH_STYLE: true
              S3_PROTOCOL: https
              S3_REGION: stub
              MINIO_DOMAIN_IP: 10.10.10.10  # sss.pigsty domain name will resolve to this ip statically

              # if using SMTP (optional)
              #SMTP_ADMIN_EMAIL: [email protected]
              #SMTP_HOST: supabase-mail
              #SMTP_PORT: 2500
              #SMTP_USER: fake_mail_user
              #SMTP_PASS: fake_mail_password
              #SMTP_SENDER_NAME: fake_sender
              #ENABLE_ANONYMOUS_USERS: false


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra
    #----------------------------------------------#
    version: v4.0.0                       # pigsty version string
    admin_ip: 10.10.10.10                 # admin node ip address
    region: default                       # upstream mirror region: default|china|europe
    proxy_env:                            # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    certbot_sign: false                   # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]         # replace your email address to receive expiration notice
    infra_portal:                         # infra services exposed via portal
      home      : { domain: i.pigsty }    # default domain name
      pgadmin   : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase  : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" }
      #minio     : { domain: m.pigsty   ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      # Nginx / Domain / HTTPS : https://doc.pgsty.com/admin/portal
      supa :                              # nginx server config for supabase
        domain: supa.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8000"      # supabase service endpoint: IP:PORT
        websocket: true                   # add websocket support
        certbot: supa.pigsty              # certbot cert name, apply with `make cert`

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - 10.10.10.10 i.pigsty sss.pigsty supa.pigsty
    node_repo_modules: node,pgsql,infra   # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                        # default postgres version
    pg_conf: oltp.yml                     # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                   # prevent purging running postgres instance?
    pg_default_schemas: [ monitor, extensions ] # add new schema: extensions
    pg_default_extensions:                # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor     }
      - { name: pgstattuple        ,schema: monitor     }
      - { name: pg_buffercache     ,schema: monitor     }
      - { name: pageinspect        ,schema: monitor     }
      - { name: pg_prewarm         ,schema: monitor     }
      - { name: pg_visibility      ,schema: monitor     }
      - { name: pg_freespacemap    ,schema: monitor     }
      - { name: pg_wait_sampling   ,schema: monitor     }
      # move default extensions to `extensions` schema for supabase
      - { name: postgres_fdw       ,schema: extensions  }
      - { name: file_fdw           ,schema: extensions  }
      - { name: btree_gist         ,schema: extensions  }
      - { name: btree_gin          ,schema: extensions  }
      - { name: pg_trgm            ,schema: extensions  }
      - { name: intagg             ,schema: extensions  }
      - { name: intarray           ,schema: extensions  }
      - { name: pg_repack          ,schema: extensions  }

    #----------------------------------------------#
    # BACKUP : https://doc.pgsty.com/pgsql/backup
    #----------------------------------------------#
    minio_endpoint: https://sss.pigsty:9000 # explicit overwrite minio endpoint with haproxy port
    pgbackrest_method: minio              # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_repo:                      # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                              # default pgbackrest repo with local posix fs
        path: /pg/backup                  # local backup directory, `/pg/backup` by default
        retention_full_type: count        # retention full backups by count
        retention_full: 2                 # keep 2, at most 3 full backups when using local fs repo
      minio:                              # optional minio repo for pgbackrest
        type: s3                          # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty           # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1              # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql                  # minio bucket name, `pgsql` by default
        s3_key: pgbackrest                # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup      # minio user secret key for pgbackrest <------------------ HEY, DID YOU CHANGE THIS?
        s3_uri_style: path                # use path style uri for minio rather than host style
        path: /pgbackrest                 # minio backup path, default is `/pgbackrest`
        storage_port: 9000                # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                          # Enable block incremental backup
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest           # AES encryption password, default is 'pgBackRest'  <----- HEY, DID YOU CHANGE THIS?
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days
      s3:                                 # you can use cloud object storage as backup repo
        type: s3                          # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest           # AES encryption password, default is 'pgBackRest'
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The supabase template provides a complete self-hosted Supabase solution, allowing you to run this open-source Firebase alternative on your own infrastructure.

Architecture:

  • PostgreSQL: Production-grade Pigsty-managed PostgreSQL (with HA support)
  • Docker Containers: Supabase stateless services (Auth, Storage, Realtime, Edge Functions, etc.)
  • MinIO: S3-compatible object storage for file storage and PostgreSQL backup
  • Nginx: Reverse proxy and HTTPS termination

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Supabase’s built-in database container
  • Supports PostgreSQL high availability (can be expanded to a three-node cluster)
  • Installs all Supabase-required extensions (pg_net, pgjwt, pg_graphql, vector, etc.)
  • Integrated MinIO object storage for file uploads and backups
  • HTTPS support with Let’s Encrypt automatic certificates

Deployment Steps:

curl https://repo.pigsty.io/get | bash   # Download Pigsty
./configure -c supabase                   # Use supabase config template
./deploy.yml                              # Install Pigsty, PostgreSQL, MinIO
./docker.yml                              # Install Docker
./app.yml                                 # Start Supabase containers

Access:

# Supabase Studio
https://supa.pigsty   (username: supabase, password: pigsty)

# Direct PostgreSQL connection
psql postgres://supabase_admin:[email protected]:5432/postgres
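
Once the containers are up, you can sanity-check the API gateway with the anon key; this sketch assumes this template's defaults (Kong listening on port 8000, proxied as supa.pigsty), with <ANON_KEY> taken from /opt/supabase/.env:

# REST API smoke test via the Kong gateway
curl -sk "https://supa.pigsty/rest/v1/" -H "apikey: <ANON_KEY>"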

Use Cases:

  • Need to self-host BaaS (Backend as a Service) platform
  • Want full control over data and infrastructure
  • Need enterprise-grade PostgreSQL HA and backups
  • Compliance or cost concerns with Supabase cloud service

Notes:

  • Must change JWT_SECRET: use a random string of at least 32 characters, and regenerate ANON_KEY and SERVICE_ROLE_KEY accordingly (see the sketch below)
  • Configure proper domain names (SITE_URL, API_EXTERNAL_URL)
  • Production environments should enable HTTPS (can use certbot for auto certificates)
  • Docker network needs access to PostgreSQL (172.17.0.0/16 HBA rule configured)
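
For the JWT secret itself, any sufficiently long random string works; one illustrative way to generate it:

# generate a 40-character hex string to use as JWT_SECRET
openssl rand -hex 20

ANON_KEY and SERVICE_ROLE_KEY are JWTs signed with this secret, so after changing it, regenerate both keys with the generator linked from the Supabase self-hosting guide referenced in the config above.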

17 - HA Templates

18 - ha/simu

20-node production environment simulation for large-scale deployment testing

The ha/simu configuration template is a 20-node production environment simulation, requiring a powerful host machine to run.


Overview

  • Config Name: ha/simu
  • Node Count: 20 nodes, pigsty/vagrant/spec/simu.rb
  • Description: 20-node production environment simulation, requires powerful host machine
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64

Usage:

./configure -c ha/simu [-i <primary_ip>]

Content

Source: pigsty/conf/ha/simu.yml

---
#==============================================================#
# File      :   simu.yml
# Desc      :   Pigsty Simubox: a 20 node prod simulation env
# Ctime     :   2023-07-20
# Mtime     :   2025-12-23
# Docs      :   https://doc.pgsty.com/config
# License   :   AGPLv3 @ https://doc.pgsty.com/about/license
# Copyright :   2018-2025  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:

  children:

    #==========================================================#
    # infra: 3 nodes
    #==========================================================#
    # ./infra.yml -l infra
    # ./docker.yml -l infra (optional)
    infra:
      hosts:
        10.10.10.10: {}
        10.10.10.11: { repo_enabled: false }
        10.10.10.12: { repo_enabled: false }
      vars:
        docker_enabled: true
        node_conf: oltp         # use oltp template for infra nodes
        pg_conf: oltp.yml       # use oltp template for infra pgsql
        pg_exporters:           # bin/pgmon-add pg-meta2/pg-src2/pg-dst2
          20001: {pg_cluster: pg-meta2   ,pg_seq: 1 ,pg_host: 10.10.10.10, pg_databases: [{ name: meta }]}
          20002: {pg_cluster: pg-meta2   ,pg_seq: 2 ,pg_host: 10.10.10.11, pg_databases: [{ name: meta }]}
          20003: {pg_cluster: pg-meta2   ,pg_seq: 3 ,pg_host: 10.10.10.12, pg_databases: [{ name: meta }]}

          20004: {pg_cluster: pg-src2    ,pg_seq: 1 ,pg_host: 10.10.10.31, pg_databases: [{ name: src }]}
          20005: {pg_cluster: pg-src2    ,pg_seq: 2 ,pg_host: 10.10.10.32, pg_databases: [{ name: src }]}
          20006: {pg_cluster: pg-src2    ,pg_seq: 3 ,pg_host: 10.10.10.33, pg_databases: [{ name: src }]}

          20007: {pg_cluster: pg-dst2    ,pg_seq: 1 ,pg_host: 10.10.10.41, pg_databases: [{ name: dst }]}
          20008: {pg_cluster: pg-dst2    ,pg_seq: 2 ,pg_host: 10.10.10.42, pg_databases: [{ name: dst }]}
          20009: {pg_cluster: pg-dst2    ,pg_seq: 3 ,pg_host: 10.10.10.43, pg_databases: [{ name: dst }]}


    #==========================================================#
    # nodes: 20 nodes
    #==========================================================#
    # ./node.yml
    nodes:
      hosts:
        10.10.10.10 : { nodename: meta1  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 1 ,pg_role: primary, infra_seq: 1 }
        10.10.10.11 : { nodename: meta2  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 2 ,pg_role: replica, infra_seq: 2 }
        10.10.10.12 : { nodename: meta3  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 3 ,pg_role: replica, infra_seq: 3 }
        10.10.10.18 : { nodename: proxy1 ,node_cluster: proxy  ,vip_address: 10.10.10.20 ,vip_vrid: 20 ,vip_interface: eth1 ,vip_role: master }
        10.10.10.19 : { nodename: proxy2 ,node_cluster: proxy  ,vip_address: 10.10.10.20 ,vip_vrid: 20 ,vip_interface: eth1 ,vip_role: backup }
        10.10.10.21 : { nodename: minio1 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 1 }
        10.10.10.22 : { nodename: minio2 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 2 }
        10.10.10.23 : { nodename: minio3 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 3 }
        10.10.10.24 : { nodename: minio4 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 4 }
        10.10.10.25 : { nodename: etcd1  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 1 }
        10.10.10.26 : { nodename: etcd2  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 2 }
        10.10.10.27 : { nodename: etcd3  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 3 }
        10.10.10.28 : { nodename: etcd4  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 4 }
        10.10.10.29 : { nodename: etcd5  ,node_cluster: etcd   ,etcd_cluster: etcd ,etcd_seq: 5 }
        10.10.10.31 : { nodename: pg-src-1 ,node_cluster: pg-src ,node_id_from_pg: true }
        10.10.10.32 : { nodename: pg-src-2 ,node_cluster: pg-src ,node_id_from_pg: true }
        10.10.10.33 : { nodename: pg-src-3 ,node_cluster: pg-src ,node_id_from_pg: true }
        10.10.10.41 : { nodename: pg-dst-1 ,node_cluster: pg-dst ,node_id_from_pg: true }
        10.10.10.42 : { nodename: pg-dst-2 ,node_cluster: pg-dst ,node_id_from_pg: true }
        10.10.10.43 : { nodename: pg-dst-3 ,node_cluster: pg-dst ,node_id_from_pg: true }

    #==========================================================#
    # etcd: 5 nodes dedicated etcd cluster
    #==========================================================#
    # ./etcd.yml -l etcd;
    etcd:
      hosts:
        10.10.10.25: {}
        10.10.10.26: {}
        10.10.10.27: {}
        10.10.10.28: {}
        10.10.10.29: {}
      vars: {}

    #==========================================================#
    # minio: 4 nodes dedicated minio cluster
    #==========================================================#
    # ./minio.yml -l minio;
    minio:
      hosts:
        10.10.10.21: {}
        10.10.10.22: {}
        10.10.10.23: {}
        10.10.10.24: {}
      vars:
        minio_data: '/data{1...4}' # 4 node x 4 disk
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #==========================================================#
    # proxy: 2 nodes used as dedicated haproxy server
    #==========================================================#
    # ./node.yml -l proxy
    proxy:
      hosts:
        10.10.10.18: {}
        10.10.10.19: {}
      vars:
        vip_enabled: true
        haproxy_services:      # expose minio service : sss.pigsty:9000
          - name: minio        # [REQUIRED] service name, unique
            port: 9000         # [REQUIRED] service port, unique
            balance: leastconn # Use leastconn algorithm and minio health check
            options: [ "option httpchk", "option http-keep-alive", "http-check send meth OPTIONS uri /minio/health/live", "http-check expect status 200" ]
            servers:           # reload service with ./node.yml -t haproxy_config,haproxy_reload
              - { name: minio-1 ,ip: 10.10.10.21 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.22 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.23 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.24 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

    #==========================================================#
    # pg-meta: reuse infra node as meta cmdb
    #==========================================================#
    # ./pgsql.yml -l pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1 , pg_role: primary }
        10.10.10.11: { pg_seq: 2 , pg_role: replica }
        10.10.10.12: { pg_seq: 3 , pg_role: replica }
      vars:
        pg_cluster: pg-meta
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector}]}
          - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
          - { name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database }
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_libs: 'pg_stat_statements, auto_explain' # default shared preload libraries
        node_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #==========================================================#
    # pg-src: dedicate 3 node source cluster
    #==========================================================#
    # ./pgsql.yml -l pg-src
    pg-src:
      hosts:
        10.10.10.31: { pg_seq: 1 ,pg_role: primary }
        10.10.10.32: { pg_seq: 2 ,pg_role: replica }
        10.10.10.33: { pg_seq: 3 ,pg_role: replica }
      vars:
        pg_cluster: pg-src
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: src }]


    #==========================================================#
    # pg-dst: dedicate 3 node destination cluster
    #==========================================================#
    # ./pgsql.yml -l pg-dst
    pg-dst:
      hosts:
        10.10.10.41: { pg_seq: 1 ,pg_role: primary }
        10.10.10.42: { pg_seq: 2 ,pg_role: replica }
        10.10.10.43: { pg_seq: 3 ,pg_role: replica }
      vars:
        pg_cluster: pg-dst
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.4/24
        pg_vip_interface: eth1
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: dst } ]


    #==========================================================#
    # redis-meta: reuse the 5 etcd nodes as redis sentinel
    #==========================================================#
    # ./redis.yml -l redis-meta
    redis-meta:
      hosts:
        10.10.10.25: { redis_node: 1 , redis_instances: { 26379: {} } }
        10.10.10.26: { redis_node: 2 , redis_instances: { 26379: {} } }
        10.10.10.27: { redis_node: 3 , redis_instances: { 26379: {} } }
        10.10.10.28: { redis_node: 4 , redis_instances: { 26379: {} } }
        10.10.10.29: { redis_node: 5 , redis_instances: { 26379: {} } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 256MB
        redis_sentinel_monitor:  # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-src, host: 10.10.10.31, port: 6379 ,password: redis.src, quorum: 1 }
          - { name: redis-dst, host: 10.10.10.41, port: 6379 ,password: redis.dst, quorum: 1 }

    #==========================================================#
    # redis-src: reuse pg-src 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-src
    redis-src:
      hosts:
        10.10.10.31: { redis_node: 1 , redis_instances: {6379: {  } }}
        10.10.10.32: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.31 6379' }, 6380: { replica_of: '10.10.10.32 6379' } }}
        10.10.10.33: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.31 6379' }, 6380: { replica_of: '10.10.10.33 6379' } }}
      vars:
        redis_cluster: redis-src
        redis_password: 'redis.src'
        redis_max_memory: 64MB

    #==========================================================#
    # redis-dst: reuse pg-dst 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-dst
    redis-dst:
      hosts:
        10.10.10.41: { redis_node: 1 , redis_instances: {6379: {  }                               }}
        10.10.10.42: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.41 6379' } }}
        10.10.10.43: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.41 6379' } }}
      vars:
        redis_cluster: redis-dst
        redis_password: 'redis.dst'
        redis_max_memory: 64MB

    #==========================================================#
    # pg-tmp: reuse proxy nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-tmp
    pg-tmp:
      hosts:
        10.10.10.18: { pg_seq: 1 ,pg_role: primary }
        10.10.10.19: { pg_seq: 2 ,pg_role: replica }
      vars:
        pg_cluster: pg-tmp
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: tmp } ]

    #==========================================================#
    # pg-etcd: reuse etcd nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-etcd
    pg-etcd:
      hosts:
        10.10.10.25: { pg_seq: 1 ,pg_role: primary }
        10.10.10.26: { pg_seq: 2 ,pg_role: replica }
        10.10.10.27: { pg_seq: 3 ,pg_role: replica }
        10.10.10.28: { pg_seq: 4 ,pg_role: replica }
        10.10.10.29: { pg_seq: 5 ,pg_role: offline }
      vars:
        pg_cluster: pg-etcd
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: etcd } ]

    #==========================================================#
    # pg-minio: reuse minio nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-minio
    pg-minio:
      hosts:
        10.10.10.21: { pg_seq: 1 ,pg_role: primary }
        10.10.10.22: { pg_seq: 2 ,pg_role: replica }
        10.10.10.23: { pg_seq: 3 ,pg_role: replica }
        10.10.10.24: { pg_seq: 4 ,pg_role: replica }
      vars:
        pg_cluster: pg-minio
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: minio } ]

    #==========================================================#
    # ferret: reuse pg-src as mongo (ferretdb)
    #==========================================================#
    # ./mongo.yml -l ferret
    ferret:
      hosts:
        10.10.10.31: { mongo_seq: 1 }
        10.10.10.32: { mongo_seq: 2 }
        10.10.10.33: { mongo_seq: 3 }
      vars:
        mongo_cluster: ferret
        mongo_pgurl: 'postgres://test:[email protected]:5432/src'


  #============================================================#
  # Global Variables
  #============================================================#
  vars:

    #==========================================================#
    # INFRA
    #==========================================================#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe
    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      minio        : { domain: m.pigsty    ,endpoint: "10.10.10.21:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty  ,endpoint: "127.0.0.1:8884" }
      pgadmin      : { domain: adm.pigsty  ,endpoint: "127.0.0.1:8885" }
      pgweb        : { domain: cli.pigsty  ,endpoint: "127.0.0.1:8886" }
      bytebase     : { domain: ddl.pigsty  ,endpoint: "127.0.0.1:8887" }
      jupyter      : { domain: lab.pigsty  ,endpoint: "127.0.0.1:8888"  , websocket: true }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }

    #==========================================================#
    # NODE
    #==========================================================#
    node_id_from_pg: false            # use nodename rather than pg identity as hostname
    node_conf: tiny                   # use small node template
    node_timezone: Asia/Hong_Kong     # use Asia/Hong_Kong Timezone
    node_dns_servers:                 # DNS servers in /etc/resolv.conf
      - 10.10.10.10
      - 10.10.10.11
    node_etc_hosts:
      - 10.10.10.10 i.pigsty
      - 10.10.10.20 sss.pigsty        # point minio service domain to the L2 VIP of proxy cluster
    node_ntp_servers:                 # NTP servers in /etc/chrony.conf
      - pool cn.pool.ntp.org iburst
      - pool 10.10.10.10 iburst
    node_admin_ssh_exchange: false    # exchange admin ssh key among node cluster

    #==========================================================#
    # PGSQL
    #==========================================================#
    pg_conf: tiny.yml
    pgbackrest_method: minio          # USE THE HA MINIO THROUGH A LOAD BALANCER
    pg_dbsu_ssh_exchange: false       # do not exchange dbsu ssh key among pgsql cluster
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days


    #==========================================================#
    # Repo
    #==========================================================#
    repo_packages: [
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
      pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl
    ]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/simu template is a large-scale production environment simulation for testing and validating complex scenarios.

Architecture:

  • 3-node INFRA (monitoring/alerting/Nginx/DNS), which also hosts the 3-node pg-meta CMDB cluster
  • 5-node dedicated ETCD cluster and 4-node dedicated MinIO cluster (4 disks per node)
  • 2-node Proxy (HAProxy + Keepalived L2 VIP) in front of the MinIO service
  • Multiple PostgreSQL clusters:
    • pg-meta: 3-node HA cluster on the infra nodes
    • pg-src / pg-dst: two 3-node clusters for replication and migration testing
    • pg-tmp / pg-etcd / pg-minio: clusters reusing the proxy, etcd, and minio nodes
  • Redis: a 5-node sentinel cluster (redis-meta) plus primary-replica clusters (redis-src, redis-dst)
  • FerretDB (mongo) cluster reusing the pg-src nodes

Use Cases:

  • Large-scale deployment testing and validation
  • High availability failover drills
  • Performance benchmarking
  • New feature preview and evaluation

Notes:

  • Requires a powerful host machine (64GB+ RAM recommended)
  • Uses Vagrant virtual machines for simulation (see the sketch below)
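
If you are using the bundled Vagrant setup, bringing up the sandbox looks roughly like the sketch below; the config helper invocation is an assumption, so check the pigsty Makefile and vagrant/spec/simu.rb for the authoritative entry point:

cd pigsty/vagrant
./config simu    # hypothetical helper that activates spec/simu.rb as the active spec
vagrant up       # provision all 20 virtual machines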

19 - ha/full

Four-node complete feature demonstration environment with two PostgreSQL clusters, MinIO, Redis, etc.

The ha/full configuration template is Pigsty’s recommended sandbox demonstration environment, deploying two PostgreSQL clusters across four nodes for testing and demonstrating various Pigsty capabilities.

Most Pigsty tutorials and examples are based on this template’s sandbox environment.


Overview

  • Config Name: ha/full
  • Node Count: Four nodes
  • Description: Four-node complete feature demonstration environment with two PostgreSQL clusters, MinIO, Redis, etc.
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: ha/trio, ha/safe, demo/demo

Usage:

./configure -c ha/full [-i <primary_ip>]

After configuration, modify the IP addresses of the other three nodes.
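
The same sed trick used elsewhere in this document works here (a sketch; the bracketed IPs are placeholders for your real nodes):

cd ~/pigsty
sed -ie 's/10.10.10.11/<node2_ip>/g; s/10.10.10.12/<node3_ip>/g; s/10.10.10.13/<node4_ip>/g' pigsty.yml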


Content

Source: pigsty/conf/ha/full.yml

---
#==============================================================#
# File      :   full.yml
# Desc      :   Pigsty Local Sandbox 4-node Demo Config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    # infra: monitor, alert, repo, etc..
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        docker_enabled: true      # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    # etcd cluster for HA postgres DCS
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd

    # minio (single node, used as backup repo)
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    # postgres cluster: pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta     ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] }
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup 1 am everyday
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    # pgsql 3 node ha cluster: pg-test
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
      #minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl ,pg18-olap]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/full template is Pigsty’s complete feature demonstration configuration, showcasing the collaboration of various components.

Components Overview:

Component  | Node Distribution | Description
INFRA      | Node 1            | Monitoring/Alerting/Nginx/DNS
ETCD       | Node 1            | DCS Service
MinIO      | Node 1            | S3-compatible Storage
pg-meta    | Node 1            | Single-node PostgreSQL
pg-test    | Nodes 2-4         | Three-node HA PostgreSQL
redis-ms   | Node 1            | Redis Primary-Replica Mode
redis-meta | Node 2            | Redis Sentinel Mode
redis-test | Nodes 3-4         | Redis Native Cluster Mode

Use Cases:

  • Pigsty feature demonstration and learning
  • Development testing environments
  • Evaluating HA architecture
  • Comparing different Redis modes

Differences from ha/trio:

  • Added second PostgreSQL cluster (pg-test)
  • Added three Redis cluster mode examples
  • Infrastructure runs on a single node (instead of three nodes)

Notes:

  • This template is mainly for demonstration and testing; for production, refer to ha/trio or ha/safe
  • MinIO backup enabled by default; comment out related config if not needed
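
For instance, to drop the MinIO repo and fall back to the local filesystem repo, flipping a single global parameter should be enough, since the local repo is already defined in the config above (a sketch):

pgbackrest_method: local          # use the local posix fs backup repo instead of minio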

20 - ha/safe

Security-hardened HA configuration template with high-standard security best practices

The ha/safe configuration template is based on the ha/trio template, providing a security-hardened configuration with high-standard security best practices.


Overview

  • Config Name: ha/safe
  • Node Count: Three nodes (optional delayed replica)
  • Description: Security-hardened HA configuration with high-standard security best practices
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64 (some security extensions unavailable on ARM64)
  • Related: ha/trio, ha/full

Usage:

./configure -c ha/safe [-i <primary_ip>]

Security Hardening Measures

The ha/safe template implements the following security hardening:

  • Mandatory SSL Encryption: SSL enabled for both PostgreSQL and PgBouncer
  • Strong Password Policy: passwordcheck extension enforces password complexity
  • User Expiration: All users set to 20-year expiration
  • Minimal Connection Scope: Limit PostgreSQL/Patroni/PgBouncer listen addresses
  • Strict HBA Rules: Mandatory SSL authentication, admin requires certificate
  • Audit Logs: Record connection and disconnection events
  • Delayed Replica: Optional 1-hour delayed replica for recovery from mistakes
  • Critical Template: Uses crit.yml tuning template for zero data loss
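
These measures only help if the default passwords are actually replaced; the template's own placeholder (GneratePasswordWith-pwgen-s-16-1) hints at pwgen (a sketch):

pwgen -s 16 1              # one random, secure 16-character password
pwgen -s 16 9              # or a batch of nine, one per credential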

Content

Source: pigsty/conf/ha/safe.yml

---
#==============================================================#
# File      :   safe.yml
# Desc      :   Pigsty 3-node security enhance template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#===== SECURITY ENHANCEMENT CONFIG TEMPLATE WITH 3 NODES ======#
#   * 3 infra nodes, 3 etcd nodes, single minio node
#   * 3-instance pgsql cluster with an extra delayed instance
#   * crit.yml templates, no data loss, checksum enforced
#   * enforce ssl on postgres & pgbouncer, use postgres by default
#   * enforce an expiration date for all users (20 years by default)
#   * enforce strong password policy with passwordcheck extension
#   * enforce changing default password for all users
#   * log connections and disconnections
#   * restrict listen ip address for postgres/patroni/pgbouncer


all:
  children:

    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars: { patroni_watchdog_mode: off }

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_conf: crit.yml
        pg_users:
          - { name: dbuser_meta , password: Pleas3-ChangeThisPwd ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: Make.3ure-Compl1ance  ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_services:
          - { name: standby , ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
        pg_listen: '${ip},${vip},${lo}'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

    # OPTIONAL delayed cluster for pg-meta
    #pg-meta-delay: # delayed instance for pg-meta (1 hour ago)
    #  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
    #  vars: { pg_cluster: pg-meta-delay }


  ####################################################################
  #                          Parameters                              #
  ####################################################################
  vars: # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    patroni_ssl_enabled: true         # secure patroni RestAPI communications with SSL?
    pgbouncer_sslmode: require        # pgbouncer client ssl mode: disable|allow|prefer|require|verify-ca|verify-full, disable by default
    pg_default_service_dest: postgres # default service destination to postgres instead of pgbouncer
    pgbackrest_method: minio          # pgbackrest repo method: local,minio,[user-defined...]

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    minio_users: # and configure `pgbackrest_repo` & `minio_users` accordingly
      - { access_key: dba , secret_key: S3User.DBA.Strong.Password, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: Min10.bAckup ,policy: readwrite }
    pgbackrest_repo: # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local: # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio: # optional minio repo for pgbackrest
        s3_key: pgbackrest            # <-------- CHANGE THIS, SAME AS `minio_users` access_key
        s3_key_secret: Min10.bAckup   # <-------- CHANGE THIS, SAME AS `minio_users` secret_key
        cipher_pass: 'pgBR.${pg_cluster}'  # <-------- CHANGE THIS, you can use cluster name as part of password
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days


    #----------------------------------#
    # Access Control
    #----------------------------------#
    # add passwordcheck extension to enforce strong password policy
    pg_libs: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
    pg_extensions:
      - passwordcheck, supautils, pgsodium, pg_vault, pg_session_jwt, anonymizer, pgsmcrypto, pgauditlogtofile, pgaudit #, pgaudit17, pgaudit16, pgaudit15, pgaudit14
      - pg_auth_mon, credcheck, pgcryptokey, pg_jobmon, logerrors, login_hook, set_user, pgextwlist, pg_auditor, sslutils, noset #pg_tde #pg_snakeoil
    pg_default_roles: # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [ dbrole_readonly ]               ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [ pg_monitor, dbrole_readwrite ]  ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
      - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [ pg_monitor, dbrole_readonly ]   ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [ dbrole_admin ]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [ pg_monitor ] ,expire_in: 7300 ,pgbouncer: true ,parameters: { log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_hba_rules: # postgres host-based auth rules by default, order by `order`
      - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'   ,order: 100}
      - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident'  ,order: 150}
      - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: ssl   ,title: 'replicator replication from localhost' ,order: 200}
      - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: ssl   ,title: 'replicator replication from intranet'  ,order: 250}
      - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: ssl   ,title: 'replicator postgres db from intranet'  ,order: 300}
      - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password'  ,order: 350}
      - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: ssl   ,title: 'monitor from infra host with password' ,order: 400}
      - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'    ,order: 450}
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'    ,order: 500}
      - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: ssl   ,title: 'pgbouncer read/write via local socket' ,order: 550}
      - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: ssl   ,title: 'read/write biz user via password'      ,order: 600}
      - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: ssl   ,title: 'allow etl offline tasks from intranet' ,order: 650}
    pgb_default_hba_rules: # pgbouncer host-based authentication rules, order by `order`
      - { user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident' ,order: 100}
      - { user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd'  ,order: 150}
      - { user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: ssl   ,title: 'monitor access via intranet with pwd'  ,order: 200}
      - { user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr'  ,order: 250}
      - { user: '${admin}'   ,db: all         ,addr: intra     ,auth: ssl   ,title: 'admin access via intranet with pwd'    ,order: 300}
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'    ,order: 350}
      - { user: 'all'        ,db: all         ,addr: intra     ,auth: ssl   ,title: 'allow all user intra access with pwd'  ,order: 400}

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    #node_selinux_mode: enforcing     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: off, none, zone, zone by default

    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: You.Have2Use-A_VeryStrongPassword
    grafana_view_password: DBUser.Viewer
    #pg_admin_username: dbuser_dba
    pg_admin_password: PessWorb.Should8eStrong-eNough
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: MekeSuerYour.PassWordI5secured
    #pg_replication_username: replicator
    pg_replication_password: doNotUseThis-PasswordFor.AnythingElse
    #patroni_username: postgres
    patroni_password: don.t-forget-to-change-thEs3-password
    #haproxy_admin_username: admin
    haproxy_admin_password: GneratePasswordWith-pwgen-s-16-1
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/safe template is Pigsty’s security-hardened configuration, designed for production environments with high security requirements.

Security Features Summary:

Security MeasureDescription
SSL EncryptionFull-chain SSL for PostgreSQL/PgBouncer/Patroni
Strong Passwordpasswordcheck extension enforces complexity
User ExpirationAll users expire in 20 years (expire_in: 7300)
Strict HBAAdmin remote access requires certificate
Encrypted BackupMinIO backup with AES-256-CBC encryption
Audit Logspgaudit extension for SQL audit logging
Delayed Replica1-hour delayed replica for mistake recovery

Use Cases:

  • Finance, healthcare, government sectors with high security requirements
  • Environments needing compliance audit requirements
  • Critical business with extremely high data security demands

Notes:

  • Some security extensions are unavailable on ARM64; enable them as appropriate
  • All default passwords must be changed to strong ones
  • Combine with regular security audits
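
To verify that the HBA rules really enforce SSL, try connecting from an intranet host with TLS disabled and then enabled (a sketch, using the dbuser_meta account defined above; the first attempt should be rejected):

psql "host=10.10.10.10 dbname=meta user=dbuser_meta sslmode=disable" -c 'SELECT 1'   # expected: no matching pg_hba rule
psql "host=10.10.10.10 dbname=meta user=dbuser_meta sslmode=require" -c 'SELECT 1'   # expected: success after password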

21 - ha/trio

Three-node standard HA configuration, tolerates any single server failure

Three nodes is the minimum scale for achieving true high availability. The ha/trio template uses a three-node standard HA architecture, with INFRA, ETCD, and PGSQL all deployed across three nodes, tolerating any single server failure.


Overview

  • Config Name: ha/trio
  • Node Count: Three nodes
  • Description: Three-node standard HA architecture, tolerates any single server failure
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: ha/dual, ha/full, ha/safe

Usage:

./configure -c ha/trio [-i <primary_ip>]

After configuration, change the placeholder IPs 10.10.10.11 and 10.10.10.12 to your actual node IP addresses.


Content

Source: pigsty/conf/ha/trio.yml

---
#==============================================================#
# File      :   trio.yml
# Desc      :   Pigsty 3-node standard HA template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# 3 infra node, 3 etcd node, 3 pgsql node, and 1 minio node

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        patroni_watchdog_mode: off # do not fence infra nodes

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta , password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: DBUser.View ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      #minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/trio template is Pigsty’s standard HA configuration, providing true automatic failover capability.

Architecture:

  • Three-node INFRA: Distributed deployment of Prometheus/Grafana/Nginx
  • Three-node ETCD: DCS majority election, tolerates single-point failure
  • Three-node PostgreSQL: One primary, two replicas, automatic failover
  • Single-node MinIO: Can be expanded to multi-node as needed

HA Guarantees:

  • Three-node ETCD tolerates one node failure, maintains majority
  • PostgreSQL primary failure triggers automatic Patroni election for new primary
  • L2 VIP follows primary, applications don’t need to modify connection config
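
The majority mechanism can be observed directly on the etcd side (a sketch; Pigsty runs etcd with TLS, and the certificate paths below are assumptions to adapt to your installation; with RBAC enabled you may also need --user root:<password>):

etcdctl --endpoints=https://10.10.10.10:2379,https://10.10.10.11:2379,https://10.10.10.12:2379 \
  --cacert=/etc/pki/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
  endpoint health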

Use Cases:

  • Minimum HA deployment for production environments
  • Critical business requiring automatic failover
  • Foundation architecture for larger scale deployments

Extension Suggestions:

  • For stronger data security, refer to ha/safe template
  • For more demo features, refer to ha/full template
  • Production environments should enable pgbackrest_method: minio for remote backup

22 - ha/dual

Two-node configuration, limited HA deployment tolerating specific server failure

The ha/dual template uses two-node deployment, implementing a “semi-HA” architecture with one primary and one standby. If you only have two servers, this is a pragmatic choice.


Overview

  • Config Name: ha/dual
  • Node Count: Two nodes
  • Description: Two-node limited HA deployment, tolerates specific server failure
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: ha/trio, slim

Usage:

./configure -c ha/dual [-i <primary_ip>]

After configuration, change the placeholder IP 10.10.10.11 to your actual standby node IP address.


Content

Source: pigsty/conf/ha/dual.yml

---
#==============================================================#
# File      :   dual.yml
# Desc      :   Pigsty deployment example for two nodes
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


# It is recommended to use at least three nodes in a production deployment,
# but sometimes only two nodes are available; that's what dual.yml is for.
#
# In this setup, we have two nodes, .10 (admin_node) and .11 (pgsql_primary):
#
# If .11 is down, .10 will take over since the dcs:etcd is still alive.
# If .10 is down, .11 (pgsql primary) will keep functioning as a primary if:
#   - Only dcs:etcd is down
#   - Only pgsql is down
# If both etcd & pgsql are down (e.g. the whole node is down), the primary will demote itself.


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, optional backup repo for pgbackrest
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres cluster 'pg-meta' with single primary instance
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: replica }
        10.10.10.11: { pg_seq: 2, pg_role: primary }  # <----- use this as primary by default
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector }] } ]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  vars:                               # global parameters
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    infra_portal:                     # domain names and upstream servers
      home   : { domain: i.pigsty }
      #minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/dual template is Pigsty’s two-node limited HA configuration, designed for scenarios with only two servers.

Architecture:

  • Node A (10.10.10.10): Admin node, runs Infra + etcd + PostgreSQL replica
  • Node B (10.10.10.11): Data node, runs PostgreSQL primary only

Failure Scenario Analysis:

Failed Node             | Impact                             | Auto Recovery
Node B down             | Primary switches to Node A         | Auto
Node A etcd down        | Primary continues running (no DCS) | Manual
Node A pgsql down       | Primary continues running          | Manual
Node A complete failure | Primary degrades to standalone     | Manual

Use Cases:

  • Budget-limited environments with only two servers
  • Situations where manual intervention for some failure scenarios is acceptable
  • Transitional solution before upgrading to three-node HA

Notes:

  • True HA requires at least three nodes (DCS needs majority)
  • Recommend upgrading to three-node architecture as soon as possible
  • L2 VIP requires network environment support (same broadcast domain)
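
The VIP constraint is easy to check empirically once the cluster is up (a sketch; vip-manager binds the VIP on whichever node currently holds the primary role):

ping -c 3 10.10.10.2                                    # pg-meta L2 VIP, reachable within the broadcast domain
ssh 10.10.10.11 'ip addr show eth1 | grep 10.10.10.2'   # VIP should be bound on the primary's interface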

23 - App Templates

24 - app/odoo

Deploy Odoo open-source ERP system using Pigsty-managed PostgreSQL

The app/odoo configuration template provides a reference configuration for self-hosting Odoo open-source ERP system, using Pigsty-managed PostgreSQL as the database.

For more details, see Odoo Deployment Tutorial


Overview

  • Config Name: app/odoo
  • Node Count: Single node
  • Description: Deploy Odoo ERP using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/odoo [-i <primary_ip>]

Content

Source: pigsty/conf/app/odoo.yml

---
#==============================================================#
# File      :   odoo.yml
# Desc      :   pigsty config for running 1-node odoo app
# Ctime     :   2025-01-11
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/odoo
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/odoo
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/odoo   # Use this odoo config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install odoo

all:
  children:

    # the odoo application (default username & password: admin/admin)
    odoo:
      hosts: { 10.10.10.10: {} }
      vars:
        app: odoo   # specify app name to be installed (in the apps)
        apps:       # define all applications
          odoo:     # app name, should have corresponding ~/pigsty/app/odoo folder
            file:   # optional directory to be created
              - { path: /data/odoo         ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/webdata ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/addons  ,state: directory, owner: 100, group: 101 }
            conf:   # override /opt/<app>/.env config file
              PG_HOST: 10.10.10.10            # postgres host
              PG_PORT: 5432                   # postgres port
              PG_USERNAME: odoo               # postgres user
              PG_PASSWORD: DBUser.Odoo        # postgres password
              ODOO_PORT: 8069                 # odoo app port
              ODOO_DATA: /data/odoo/webdata   # odoo webdata
              ODOO_ADDONS: /data/odoo/addons  # odoo plugins
              ODOO_DBNAME: odoo               # odoo database name
              ODOO_VERSION: 19.0              # odoo image version

    # the odoo database
    pg-odoo:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-odoo
        pg_users:
          - { name: odoo    ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for odoo service }
          - { name: odoo_ro ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for odoo service  }
          - { name: odoo_rw ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for odoo service }
        pg_databases:
          - { name: odoo ,owner: odoo ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - { user: all ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home  : { domain: i.pigsty }
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      odoo:                           # nginx server config for odoo
        domain: odoo.pigsty           # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8069"  # odoo service endpoint: IP:PORT
        websocket: true               # add websocket support
        certbot: odoo.pigsty          # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/odoo template provides a one-click deployment solution for Odoo open-source ERP system.

What is Odoo:

  • World’s most popular open-source ERP system
  • Covers CRM, Sales, Purchasing, Inventory, Finance, HR, and other enterprise management modules
  • Supports thousands of community and official application extensions
  • Provides web interface and mobile support

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Odoo’s built-in database
  • Supports Odoo 19.0 latest version
  • Data persisted to independent directory /data/odoo
  • Supports custom plugin directory /data/odoo/addons

Access:

# Odoo Web interface
http://odoo.pigsty:8069

# Default admin account
Username: admin
Password: admin (set on first login)

Use Cases:

  • SMB ERP systems
  • Alternative to SAP, Oracle ERP and other commercial solutions
  • Enterprise applications requiring customized business processes

Notes:

  • Odoo container runs as uid=100, gid=101; data directories need matching ownership (see the sketch below)
  • First access requires creating database and setting admin password
  • Production environments should enable HTTPS
  • Custom modules can be installed via /data/odoo/addons
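
If you add more host directories for Odoo (for example an extra addons path), give them the same ownership the app definition applies above, since the container runs as uid=100/gid=101 (a sketch; the directory name is hypothetical):

sudo mkdir -p /data/odoo/extra-addons           # hypothetical additional addons directory
sudo chown -R 100:101 /data/odoo/extra-addons   # match the odoo container uid/gid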

25 - app/dify

Deploy Dify AI application development platform using Pigsty-managed PostgreSQL

The app/dify configuration template provides a reference configuration for self-hosting Dify AI application development platform, using Pigsty-managed PostgreSQL and pgvector as vector storage.

For more details, see Dify Deployment Tutorial


Overview

  • Config Name: app/dify
  • Node Count: Single node
  • Description: Deploy Dify using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/dify [-i <primary_ip>]

Content

Source: pigsty/conf/app/dify.yml

---
#==============================================================#
# File      :   dify.yml
# Desc      :   pigsty config for running 1-node dify app
# Ctime     :   2025-02-24
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/dify
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#
# Last Verified Dify Version: v1.8.1 on 2025-09-08
# tutorial: https://doc.pgsty.com/app/dify
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/dify   # use this dify config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install dify with docker-compose
#
# To replace domain name:
#   sed -ie 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml


all:
  children:

    # the dify application
    dify:
      hosts: { 10.10.10.10: {} }
      vars:
        app: dify   # specify app name to be installed (in the apps)
        apps:       # define all applications
          dify:     # app name, should have corresponding ~/pigsty/app/dify folder
            file:   # data directory to be created
              - { path: /data/dify ,state: directory ,mode: 0755 }
            conf:   # override /opt/dify/.env config file

              # change domain, mirror, proxy, secret key
              NGINX_SERVER_NAME: dify.pigsty
              # A secret key for signing and encryption, gen with `openssl rand -base64 42` (CHANGE PASSWORD!)
              SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
              # expose DIFY nginx service with port 5001 by default
              DIFY_PORT: 5001
              # where to store dify files? the default is ./volume, we'll use another volume created above
              DIFY_DATA: /data/dify

              # proxy and mirror settings
              #PIP_MIRROR_URL: https://pypi.tuna.tsinghua.edu.cn/simple
              #SANDBOX_HTTP_PROXY: http://10.10.10.10:12345
              #SANDBOX_HTTPS_PROXY: http://10.10.10.10:12345

              # database credentials
              DB_USERNAME: dify
              DB_PASSWORD: difyai123456
              DB_HOST: 10.10.10.10
              DB_PORT: 5432
              DB_DATABASE: dify
              VECTOR_STORE: pgvector
              PGVECTOR_HOST: 10.10.10.10
              PGVECTOR_PORT: 5432
              PGVECTOR_USER: dify
              PGVECTOR_PASSWORD: difyai123456
              PGVECTOR_DATABASE: dify
              PGVECTOR_MIN_CONNECTION: 2
              PGVECTOR_MAX_CONNECTION: 10

    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dify ,password: difyai123456 ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: dify superuser }
        pg_databases:
          - { name: dify        ,owner: dify ,revokeconn: true ,comment: dify main database  }
          - { name: dify_plugin ,owner: dify ,revokeconn: true ,comment: dify plugin_daemon database }
        pg_hba_rules:
          - { user: dify ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow dify access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home   :  { domain: i.pigsty }
      #minio :  { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      dify:                            # nginx server config for dify
        domain: dify.pigsty            # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5001"   # dify service endpoint: IP:PORT
        websocket: true                # add websocket support
        certbot: dify.pigsty           # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/dify template provides a one-click deployment solution for Dify AI application development platform.

What is Dify:

  • Open-source LLM application development platform
  • Supports RAG, Agent, Workflow and other AI application modes
  • Provides visual Prompt orchestration and application building interface
  • Supports multiple LLM backends (OpenAI, Claude, local models, etc.)

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Dify’s built-in database
  • Uses pgvector as vector storage (replaces Weaviate/Qdrant)
  • Supports HTTPS and custom domain names
  • Data persisted to independent directory /data/dify

Access:

# Dify Web interface
http://dify.pigsty:5001

# Or via Nginx proxy
https://dify.pigsty

Use Cases:

  • Enterprise internal AI application development platform
  • RAG knowledge base Q&A systems
  • LLM-driven automated workflows
  • AI Agent development and deployment

Notes:

  • Must change SECRET_KEY, generate with openssl rand -base64 42
  • Configure LLM API keys (e.g., OpenAI API Key)
  • Docker network needs access to PostgreSQL (172.17.0.0/16 HBA rule configured)
  • Recommend configuring proxy to accelerate Python package downloads
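
Since VECTOR_STORE is set to pgvector, the vector extension must be creatable in the dify database; the dify user above is a superuser, so a quick smoke test looks like this (a sketch, assuming the pgvector package is installed, which Pigsty's default PostgreSQL repos provide):

PGPASSWORD=difyai123456 psql -h 10.10.10.10 -U dify -d dify <<'EOF'
CREATE EXTENSION IF NOT EXISTS vector;              -- no-op if already created
SELECT extversion FROM pg_extension WHERE extname = 'vector';
EOF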

26 - app/electric

Deploy Electric real-time sync service using Pigsty-managed PostgreSQL

The app/electric configuration template provides a reference configuration for deploying Electric SQL real-time sync service, enabling real-time data synchronization from PostgreSQL to clients.


Overview

  • Config Name: app/electric
  • Node Count: Single node
  • Description: Deploy Electric real-time sync using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/electric [-i <primary_ip>]

Content

Source: pigsty/conf/app/electric.yml

---
#==============================================================#
# File      :   electric.yml
# Desc      :   pigsty config for running 1-node electric app
# Ctime     :   2025-03-29
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/electric
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/electric
# quick start: https://electric-sql.com/docs/quickstart
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap                 # prepare local repo & ansible
# ./configure -c app/electric # use this electric config template
# vi pigsty.yml               # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml                # install pigsty & pgsql
# ./docker.yml                # install docker & docker-compose
# ./app.yml                   # install electric with docker-compose

all:
  children:
    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:

        app: electric
        apps:       # define all applications
          electric: # app name, should have corresponding ~/pigsty/app/electric folder
            conf:   # override /opt/electric/.env config file : https://electric-sql.com/docs/api/config
              DATABASE_URL: 'postgresql://electric:DBUser.Electric@10.10.10.10:5432/electric?sslmode=require'
              ELECTRIC_PORT: 8002
              ELECTRIC_PROMETHEUS_PORT: 8003
              ELECTRIC_INSECURE: true
              #ELECTRIC_SECRET: 1U6ItbhoQb4kGUU5wXBLbxvNf

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: electric ,password: DBUser.Electric ,pgbouncer: true , replication: true ,roles: [dbrole_admin] ,comment: electric main user }
        pg_databases: [{ name: electric , owner: electric }]
        pg_hba_rules:
          - { user: electric , db: replication ,addr: infra ,auth: ssl ,title: 'allow electric intranet/docker ssl access' }

  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # https_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # all_proxy:   # set your proxy here: e.g http://user:pass@proxy.xxx.com
    infra_portal:                     # domain names and upstream servers
      home : { domain: i.pigsty }
      electric:
        domain: elec.pigsty
        endpoint: "${admin_ip}:8002"
        websocket: true               # add websocket support
        certbot: elec.pigsty          # certbot cert name, apply with `make cert` (replace with your own domain!)

    #----------------------------------#
    # Safe Guard
    #----------------------------------#
    # you can enable these flags after bootstrap, to prevent purging running etcd / pgsql instances
    etcd_safeguard: false             # prevent purging running etcd instance?
    pg_safeguard: false               # prevent purging running postgres instance? false by default

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/electric template provides a one-click deployment solution for the Electric SQL real-time sync service.

What is Electric:

  • Real-time data sync service from PostgreSQL to clients
  • Supports the Local-first application architecture
  • Streams data changes in real time via logical replication
  • Provides an HTTP API for frontend applications to consume

Key Features:

  • Uses Pigsty-managed PostgreSQL as data source
  • Captures data changes via Logical Replication
  • Supports SSL encrypted connections
  • Built-in Prometheus metrics endpoint

Access:

# Electric API endpoint
http://elec.pigsty:8002

# Prometheus metrics
http://elec.pigsty:8003/metrics
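
You can smoke-test the sync service through its HTTP shape API. The /v1/shape path and the items table below are assumptions based on the Electric quickstart; check the Electric docs for your version:

# fetch the initial snapshot ("shape") of a table via the HTTP API
curl -i 'http://elec.pigsty:8002/v1/shape?table=items&offset=-1'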

Use Cases:

  • Building Local-first applications
  • Real-time data sync to clients
  • Mobile and PWA data synchronization
  • Real-time updates for collaborative applications

Notes:

  • The electric user needs the replication privilege (granted in pg_users above)
  • PostgreSQL logical replication must be enabled (wal_level = logical)
  • Production environments should use SSL connections (this template sets sslmode=require)
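
A quick way to verify these prerequisites (assuming you can run psql as a superuser against the primary):

# logical replication must be on, and the electric role must be able to replicate
psql -h 10.10.10.10 -U postgres -c 'SHOW wal_level;'
psql -h 10.10.10.10 -U postgres -c "SELECT rolreplication FROM pg_roles WHERE rolname = 'electric';"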

27 - app/maybe

Deploy Maybe personal finance management system using Pigsty-managed PostgreSQL

The app/maybe configuration template provides a reference configuration for deploying the Maybe open-source personal finance management system, using Pigsty-managed PostgreSQL as the database.


Overview

  • Config Name: app/maybe
  • Node Count: Single node
  • Description: Deploy Maybe finance management using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/maybe [-i <primary_ip>]

Content

Source: pigsty/conf/app/maybe.yml

---
#==============================================================#
# File      :   maybe.yml
# Desc      :   pigsty config for running 1-node maybe app
# Ctime     :   2025-09-08
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/maybe
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/maybe
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/maybe  # Use this maybe config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install maybe

all:
  children:

    # the maybe application (personal finance management)
    maybe:
      hosts: { 10.10.10.10: {} }
      vars:
        app: maybe   # specify app name to be installed (in the apps)
        apps:        # define all applications
          maybe:     # app name, should have corresponding ~/pigsty/app/maybe folder
            file:    # optional directory to be created
              - { path: /data/maybe             ,state: directory ,mode: 0755 }
              - { path: /data/maybe/storage     ,state: directory ,mode: 0755 }
            conf:    # override /opt/<app>/.env config file
              # Core Configuration
              MAYBE_VERSION: latest                    # Maybe image version
              MAYBE_PORT: 5002                         # Port to expose Maybe service
              MAYBE_DATA: /data/maybe                  # Data directory for Maybe
              APP_DOMAIN: maybe.pigsty                 # Domain name for Maybe
              
              # REQUIRED: Generate with: openssl rand -hex 64
              SECRET_KEY_BASE: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
              
              # Database Configuration
              DB_HOST: 10.10.10.10                    # PostgreSQL host
              DB_PORT: 5432                           # PostgreSQL port
              DB_USERNAME: maybe                      # PostgreSQL username
              DB_PASSWORD: MaybeFinance2025           # PostgreSQL password (CHANGE THIS!)
              DB_DATABASE: maybe_production           # PostgreSQL database name
              
              # Optional: API Integration
              #SYNTH_API_KEY:                         # Get from synthfinance.com

    # the maybe database
    pg-maybe:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-maybe
        pg_users:
          - { name: maybe    ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for maybe service }
          - { name: maybe_ro ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for maybe service  }
          - { name: maybe_rw ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for maybe service }
        pg_databases:
          - { name: maybe_production ,owner: maybe ,revokeconn: true ,comment: maybe main database  }
        pg_hba_rules:
          - { user: maybe ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow maybe access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup at 1am every day

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pulling docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pulling images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is formatted as http://user:pass@proxy.xxx.com
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # infra services exposed via portal
      home  : { domain: i.pigsty }    # default domain name
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      maybe:                          # nginx server config for maybe
        domain: maybe.pigsty          # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5002"  # maybe service endpoint: IP:PORT
        websocket: true               # add websocket support

    repo_enabled: false
    node_repo_modules: node,infra,pgsql

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Explanation

The app/maybe template provides a one-click deployment solution for the Maybe open-source personal finance management system.

What is Maybe:

  • Open-source personal and family finance management system
  • Supports multi-account, multi-currency asset tracking
  • Provides investment portfolio analysis and net worth calculation
  • Beautiful modern web interface

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Maybe’s built-in database
  • Data persisted to independent directory /data/maybe
  • Supports HTTPS and custom domain names
  • Multi-user permission management

Access:

# Maybe Web interface
http://maybe.pigsty:5002

# Or via Nginx proxy
https://maybe.pigsty

Use Cases:

  • Personal or family finance management
  • Investment portfolio tracking and analysis
  • Multi-account asset aggregation
  • Alternative to commercial services like Mint, YNAB

Notes:

  • You must change SECRET_KEY_BASE; generate one with openssl rand -hex 64
  • On first access you must register an admin account
  • Optionally configure a Synth API key for stock price data
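
For example, a minimal sketch; the /opt/maybe/.env path follows this template's /opt/<app>/.env convention:

# generate a fresh Rails secret and write it into the maybe .env file
SECRET=$(openssl rand -hex 64)
sed -i "s|^SECRET_KEY_BASE=.*|SECRET_KEY_BASE=${SECRET}|" /opt/maybe/.env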

28 - app/teable

Deploy Teable open-source Airtable alternative using Pigsty-managed PostgreSQL

The app/teable configuration template provides a reference configuration for deploying the Teable open-source no-code database, using Pigsty-managed PostgreSQL as the database.


Overview

  • Config Name: app/teable
  • Node Count: Single node
  • Description: Deploy Teable using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/teable [-i <primary_ip>]

Content

Source: pigsty/conf/app/teable.yml

---
#==============================================================#
# File      :   teable.yml
# Desc      :   pigsty config for running 1-node teable app
# Ctime     :   2025-02-24
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/teable
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/teable
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/teable # use this teable config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install teable with docker-compose
#
# To replace domain name:
#   sed -ie 's/tea.pigsty/tea.pigsty.cc/g' pigsty.yml

all:
  children:

    # the teable application
    teable:
      hosts: { 10.10.10.10: {} }
      vars:
        app: teable   # specify app name to be installed (in the apps)
        apps:         # define all applications
          teable:     # app name, ~/pigsty/app/teable folder
            conf:     # override /opt/teable/.env config file
              # https://github.com/teableio/teable/blob/develop/dockers/examples/standalone/.env
              # https://help.teable.io/en/deploy/env
              POSTGRES_HOST: "10.10.10.10"
              POSTGRES_PORT: "5432"
              POSTGRES_DB: "teable"
              POSTGRES_USER: "dbuser_teable"
              POSTGRES_PASSWORD: "DBUser.Teable"
              PRISMA_DATABASE_URL: "postgresql://dbuser_teable:DBUser.Teable@10.10.10.10:5432/teable"
              PUBLIC_ORIGIN: "http://tea.pigsty"
              PUBLIC_DATABASE_PROXY: "10.10.10.10:5432"
              TIMEZONE: "UTC"

              # Enable the following settings if you need email sending support
              #BACKEND_MAIL_HOST: smtp.teable.io
              #BACKEND_MAIL_PORT: 465
              #BACKEND_MAIL_SECURE: true
              #BACKEND_MAIL_SENDER: noreply.teable.io
              #BACKEND_MAIL_SENDER_NAME: Teable
              #BACKEND_MAIL_AUTH_USER: username
              #BACKEND_MAIL_AUTH_PASS: password


    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_teable ,password: DBUser.Teable ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: teable superuser }
        pg_databases:
          - { name: teable ,owner: dbuser_teable ,comment: teable database }
        pg_hba_rules:
          - { user: dbuser_teable ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow teable access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup at 1am every day
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pulling docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pulling images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is formatted as http://user:pass@proxy.xxx.com
      #all_proxy:   127.0.0.1:12345
    infra_portal:                        # domain names and upstream servers
      home   : { domain: i.pigsty }
      #minio : { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      teable:                            # nginx server config for teable
        domain: tea.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8890"     # teable service endpoint: IP:PORT
        websocket: true                  # add websocket support
        certbot: tea.pigsty              # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/teable template provides a one-click deployment solution for the Teable open-source no-code database.

What is Teable:

  • Open-source Airtable alternative
  • No-code database built on PostgreSQL
  • Supports table, kanban, calendar, form, and other views
  • Provides API and automation workflows

Key Features:

  • Uses Pigsty-managed PostgreSQL as underlying storage
  • Data is stored in real PostgreSQL tables
  • Supports direct SQL queries
  • Can integrate with other PostgreSQL tools and extensions

Access:

# Teable Web interface
http://tea.pigsty:8890

# Or via Nginx proxy
https://tea.pigsty

# Direct SQL access to underlying data
psql postgresql://dbuser_teable:DBUser.Teable@10.10.10.10:5432/teable

Use Cases:

  • Need Airtable-like functionality but want to self-host
  • Team collaboration data management
  • Need both API and SQL access
  • Want data stored in real PostgreSQL

Notes:

  • The Teable user needs superuser privileges (set via superuser: true in pg_users)
  • PUBLIC_ORIGIN must be set to the externally reachable access address
  • Email notifications are supported (optional SMTP configuration)
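
Since the data lives in ordinary PostgreSQL tables, you can inspect it directly with the credentials from this template; exact table names depend on the Teable version:

# list the tables Teable created in the underlying database
psql postgresql://dbuser_teable:DBUser.Teable@10.10.10.10:5432/teable -c '\dt'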

29 - app/registry

Deploy Docker Registry image proxy and private registry using Pigsty

The app/registry configuration template provides a reference configuration for deploying Docker Registry as an image proxy, usable as a Docker Hub mirror accelerator or a private image registry.


Overview

  • Config Name: app/registry
  • Node Count: Single node
  • Description: Deploy Docker Registry image proxy and private registry
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/registry [-i <primary_ip>]

Content

Source: pigsty/conf/app/registry.yml

---
#==============================================================#
# File      :   registry.yml
# Desc      :   pigsty config for running Docker Registry Mirror
# Ctime     :   2025-07-01
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/registry
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/registry
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./configure -c app/registry   # use this registry config template
# vi pigsty.yml                 # IMPORTANT: CHANGE DOMAIN & CREDENTIALS!
# ./deploy.yml                  # install pigsty
# ./docker.yml                  # install docker & docker-compose
# ./app.yml                     # install registry with docker-compose
#
# To replace domain name:
#   sed -ie 's/d.pigsty/registry.your-domain.com/g' pigsty.yml

#==============================================================#
# Usage Instructions:
#==============================================================#
#
# 1. Deploy the registry:
#    ./configure -c conf/app/registry.yml && ./deploy.yml && ./docker.yml && ./app.yml
#
# 2. Configure Docker clients to use the mirror:
#    Edit /etc/docker/daemon.json:
#    {
#      "registry-mirrors": ["https://registry.your-domain.com"],
#      "insecure-registries": ["registry.your-domain.com"]
#    }
#
# 3. Restart Docker daemon:
#    sudo systemctl restart docker
#
# 4. Test the registry:
#    docker pull nginx:latest  # This will now use your mirror
#
# 5. Access the web UI (optional):
#    https://registry-ui.your-domain.com
#
# 6. Monitor the registry:
#    curl https://registry.your-domain.com/v2/_catalog
#    curl https://registry.your-domain.com/v2/nginx/tags/list
#
#==============================================================#


all:
  children:

    # the docker registry mirror application
    registry:
      hosts: { 10.10.10.10: {} }
      vars:
        app: registry                    # specify app name to be installed
        apps:                            # define all applications
          registry:
            file:                        # create data directory for registry
              - { path: /data/registry ,state: directory ,mode: 0755 }
            conf:                        # environment variables for registry
              REGISTRY_DATA: /data/registry
              REGISTRY_PORT: 5000
              REGISTRY_UI_PORT: 5080
              REGISTRY_STORAGE_DELETE_ENABLED: true
              REGISTRY_LOG_LEVEL: info
              REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
              REGISTRY_PROXY_TTL: 168h

    # basic infrastructure
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                      # pigsty version string
    admin_ip: 10.10.10.10                # admin node ip address
    region: default                      # upstream mirror region: default,china,europe
    infra_portal:                        # infra services exposed via portal
      home : { domain: i.pigsty }        # default domain name

      # Docker Registry Mirror service configuration
      registry:                          # nginx server config for registry
        domain: d.pigsty                 # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5000"     # registry service endpoint: IP:PORT
        websocket: false                 # registry doesn't need websocket
        certbot: d.pigsty                # certbot cert name, apply with `make cert`

      # Optional: Registry Web UI
      registry-ui:                       # nginx server config for registry UI
        domain: dui.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5080"     # registry UI endpoint: IP:PORT
        websocket: false                 # UI doesn't need websocket
        certbot: d.pigsty                # certbot cert name for UI

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/registry template provides a one-click deployment solution for a Docker Registry image proxy.

What is Registry:

  • Docker’s official image registry implementation
  • Can serve as Docker Hub pull-through cache
  • Can also serve as private image registry
  • Supports image caching and local storage

Key Features:

  • Acts as proxy cache for Docker Hub to accelerate image pulls
  • Caches images to local storage /data/registry
  • Provides Web UI to view cached images
  • Supports custom cache expiration time

Configure Docker Client:

# Edit /etc/docker/daemon.json
{
  "registry-mirrors": ["https://d.pigsty"],
  "insecure-registries": ["d.pigsty"]
}

# Restart Docker
sudo systemctl restart docker

Access:

# Registry API
https://d.pigsty/v2/_catalog

# Web UI
http://dui.pigsty:5080

# Pull images (automatically uses proxy)
docker pull nginx:latest

Use Cases:

  • Accelerate Docker image pulls (especially in mainland China)
  • Reduce external network dependency
  • Enterprise internal private image registry
  • Offline environment image distribution

Notes:

  • Requires sufficient disk space to store cached images
  • Default cache TTL is 7 days (REGISTRY_PROXY_TTL: 168h)
  • HTTPS certificates can be configured via certbot
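
To check how much space the cache is using and what has been cached (illustrative commands, using the domains from this template):

# disk usage of the local image cache
du -sh /data/registry

# list cached repositories and their tags via the registry API
curl -s https://d.pigsty/v2/_catalog
curl -s https://d.pigsty/v2/nginx/tags/list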

30 - Misc Templates

31 - demo/el

Configuration template optimized for Enterprise Linux (RHEL/Rocky/Alma)

The demo/el configuration template is optimized for Enterprise Linux family distributions (RHEL, Rocky Linux, AlmaLinux, Oracle Linux).


Overview

  • Config Name: demo/el
  • Node Count: Single node
  • Description: Enterprise Linux optimized configuration template
  • OS Distro: el8, el9, el10
  • OS Arch: x86_64, aarch64
  • Related: meta, demo/debian

Usage:

./configure -c demo/el [-i <primary_ip>]

Content

Source: pigsty/conf/demo/el.yml

---
#==============================================================#
# File      :   el.yml
# Desc      :   Default parameters for EL System in Pigsty
# Ctime     :   2020-05-22
# Mtime     :   2025-12-27
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#==============================================================#
#                        Sandbox (4-node)                      #
#==============================================================#
# admin user : vagrant  (nopass ssh & sudo already set)        #
# 1.  meta    :    10.10.10.10     (2 Core | 4GB)    pg-meta   #
# 2.  node-1  :    10.10.10.11     (1 Core | 1GB)    pg-test-1 #
# 3.  node-2  :    10.10.10.12     (1 Core | 1GB)    pg-test-2 #
# 4.  node-3  :    10.10.10.13     (1 Core | 1GB)    pg-test-3 #
# (replace these ip if your 4-node env have different ip addr) #
# VIP 2: (l2 vip is available inside same LAN )                #
#     pg-meta --->  10.10.10.2 ---> 10.10.10.10                #
#     pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}          #
#==============================================================#


all:

  ##################################################################
  #                            CLUSTERS                            #
  ##################################################################
  # infra, etcd, minio, pgsql, and redis clusters are defined as
  # k:v pairs inside `all.children`, where the key is the cluster
  # name and the value is the cluster definition, consisting of:
  # `hosts`: cluster members' ips and instance-level variables
  # `vars` : cluster-level variables
  ##################################################################
  children:                                 # groups definition

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql cluster: pg-meta (CMDB)    #
    #----------------------------------#
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
      vars:
        pg_cluster: pg-meta

        # define business databases here: https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          #- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          #- { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          #- { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          #- { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

        # define business users here: https://doc.pgsty.com/pgsql/user
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            #login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create database? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
          #- {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
          #- {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
          #- {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }

        # define business service here: https://doc.pgsty.com/pgsql/service
        pg_services:                        # extra services in addition to pg_default_services, array of service definition
          # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
          - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
            port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
            ip: "*"                         # optional, service bind ip address, `*` for all ip by default
            selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
            dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
            check: /sync                    # optional, health check url path, / by default
            backup: "[? pg_role == `primary`]"  # backup server selector
            maxconn: 3000                   # optional, max allowed front-end connection
            balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
            options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

        # define pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_libs: 'pg_stat_statements, auto_explain' # preload pg_stat_statements & auto_explain via shared_preload_libraries
        #pg_extensions: [] # extensions to be installed on this cluster

        # define HBA rules here: https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

        node_crontab:  # make a full backup at 1am every day
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }] # create a database and user named 'test'
        node_tune: tiny
        pg_conf: tiny.yml
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup at 1am on Monday, and incremental backups on the other days
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  ####################################################################
  #                             VARS                                 #
  ####################################################################
  vars:                               # global variables


    #================================================================#
    #                         VARS: INFRA                            #
    #================================================================#

    #-----------------------------------------------------------------
    # META
    #-----------------------------------------------------------------
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    language: en                      # default language: en, zh
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # https_proxy: # set your proxy here: e.g http://user:pass@proxy.xxx.com
      # all_proxy:   # set your proxy here: e.g http://user:pass@proxy.xxx.com

    #-----------------------------------------------------------------
    # CA
    #-----------------------------------------------------------------
    ca_create: true                   # create ca if not exists? or just abort
    ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
    cert_validity: 7300d              # cert validity, 20 years by default

    #-----------------------------------------------------------------
    # INFRA_IDENTITY
    #-----------------------------------------------------------------
    #infra_seq: 1                     # infra node identity, explicitly required
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
    infra_data: /data/infra           # default data path for infrastructure data

    #-----------------------------------------------------------------
    # REPO
    #-----------------------------------------------------------------
    repo_enabled: true                # create a yum repo on this infra node?
    repo_home: /www                   # repo home dir, `/www` by default
    repo_name: pigsty                 # repo name, pigsty by default
    repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
    repo_remove: true                 # remove existing upstream repo
    repo_modules: infra,node,pgsql    # which repo modules are installed in repo_upstream
    repo_upstream:                    # where to download
      - { name: pigsty-local   ,description: 'Pigsty Local'       ,module: local   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty'  }} # used by intranet nodes
      - { name: pigsty-infra   ,description: 'Pigsty INFRA'       ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/infra/$basearch' ,china: 'https://repo.pigsty.cc/yum/infra/$basearch' }}
      - { name: pigsty-pgsql   ,description: 'Pigsty PGSQL'       ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/pgsql/el$releasever.$basearch' ,china: 'https://repo.pigsty.cc/yum/pgsql/el$releasever.$basearch' }}
      - { name: nginx          ,description: 'Nginx Repo'         ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://nginx.org/packages/rhel/$releasever/$basearch/' }}
      - { name: docker-ce      ,description: 'Docker CE'          ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable'    ,china: 'https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable' ,europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
      - { name: baseos         ,description: 'EL 8+ BaseOS'       ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/'         ,europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/'     }}
      - { name: appstream      ,description: 'EL 8+ AppStream'    ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/'  ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/'      ,europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/'  }}
      - { name: extras         ,description: 'EL 8+ Extras'       ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/'         ,europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/'     }}
      - { name: powertools     ,description: 'EL 8 PowerTools'    ,module: node    ,releases: [8     ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/' ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/'     ,europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' }}
      - { name: crb            ,description: 'EL 9 CRB'           ,module: node    ,releases: [  9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/'        ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/'            ,europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/'        }}
      - { name: epel           ,description: 'EL 8+ EPEL'         ,module: node    ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://mirrors.edge.kernel.org/fedora-epel/$releasever/Everything/$basearch/' ,china: 'https://mirrors.aliyun.com/epel/$releasever/Everything/$basearch/'         ,europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/'     }}
      - { name: epel           ,description: 'EL 10 EPEL'         ,module: node    ,releases: [    10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://mirrors.edge.kernel.org/fedora-epel/$releasever.0/Everything/$basearch/' ,china: 'https://mirrors.aliyun.com/epel/$releasever.0/Everything/$basearch/'     ,europe: 'https://mirrors.xtom.de/epel/$releasever.0/Everything/$basearch/'   }}
      - { name: pgdg-common    ,description: 'PostgreSQL Common'  ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg-el8fix    ,description: 'PostgreSQL EL8FIX'  ,module: pgsql   ,releases: [8     ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  }}
      - { name: pgdg-el9fix    ,description: 'PostgreSQL EL9FIX'  ,module: pgsql   ,releases: [  9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   }}
      - { name: pgdg-el10fix   ,description: 'PostgreSQL EL10FIX' ,module: pgsql   ,releases: [    10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' }}
      - { name: pgdg13         ,description: 'PostgreSQL 13'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg14         ,description: 'PostgreSQL 14'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg15         ,description: 'PostgreSQL 15'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg16         ,description: 'PostgreSQL 16'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg17         ,description: 'PostgreSQL 17'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg18         ,description: 'PostgreSQL 18'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/18/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/18/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/18/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg-beta      ,description: 'PostgreSQL Testing' ,module: beta    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  }}
      - { name: pgdg-extras    ,description: 'PostgreSQL Extra'   ,module: extra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/extras/redhat/rhel-$releasever-$basearch'      ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/extras/redhat/rhel-$releasever-$basearch'      ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/extras/redhat/rhel-$releasever-$basearch'      }}
      - { name: pgdg13-nonfree ,description: 'PostgreSQL 13+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg14-nonfree ,description: 'PostgreSQL 14+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg15-nonfree ,description: 'PostgreSQL 15+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg16-nonfree ,description: 'PostgreSQL 16+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg17-nonfree ,description: 'PostgreSQL 17+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg18-nonfree ,description: 'PostgreSQL 18+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' }}
      - { name: timescaledb    ,description: 'TimescaleDB'        ,module: extra   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch'  }}
      - { name: percona        ,description: 'Percona TDE'        ,module: percona ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/percona/el$releasever.$basearch' ,china: 'https://repo.pigsty.cc/yum/percona/el$releasever.$basearch' ,origin: 'http://repo.percona.com/ppg-18.1/yum/release/$releasever/RPMS/$basearch'  }}
      - { name: wiltondb       ,description: 'WiltonDB'           ,module: mssql   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/mssql/el$releasever.$basearch', china: 'https://repo.pigsty.cc/yum/mssql/el$releasever.$basearch' , origin: 'https://download.copr.fedorainfracloud.org/results/wiltondb/wiltondb/epel-$releasever-$basearch/' }}
      - { name: groonga        ,description: 'Groonga'            ,module: groonga ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/almalinux/$releasever/$basearch/' }}
      - { name: mysql          ,description: 'MySQL'              ,module: mysql   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/yum/mysql-8.4-community/el/$releasever/$basearch/' }}
      - { name: mongo          ,description: 'MongoDB'            ,module: mongo   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/8.0/$basearch/' ,china: 'https://mirrors.aliyun.com/mongodb/yum/redhat/$releasever/mongodb-org/8.0/$basearch/' }}
      - { name: redis          ,description: 'Redis'              ,module: redis   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpmfind.net/linux/remi/enterprise/$releasever/redis72/$basearch/' }}
      - { name: grafana        ,description: 'Grafana'            ,module: grafana ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpm.grafana.com', china: 'https://mirrors.aliyun.com/grafana/yum/' }}
      - { name: kubernetes     ,description: 'Kubernetes'         ,module: kube    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/rpm/', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/rpm/' }}
      - { name: gitlab-ee      ,description: 'Gitlab EE'          ,module: gitlab  ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/el/$releasever/$basearch' }}
      - { name: gitlab-ce      ,description: 'Gitlab CE'          ,module: gitlab  ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/el/$releasever/$basearch' }}
      - { name: clickhouse     ,description: 'ClickHouse'         ,module: click   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/rpm/stable/', china: 'https://mirrors.aliyun.com/clickhouse/rpm/stable/' }}

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [ pgsql-main ]
    repo_url_packages: []

    #-----------------------------------------------------------------
    # INFRA_PACKAGE
    #-----------------------------------------------------------------
    infra_packages:                   # packages to be installed on infra nodes
      - grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
      - node_exporter,blackbox_exporter,nginx_exporter,pg_exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx
    infra_packages_pip: ''            # pip installed packages for infra nodes

    #-----------------------------------------------------------------
    # NGINX
    #-----------------------------------------------------------------
    nginx_enabled: true               # enable nginx on this infra node?
    nginx_clean: false                # clean existing nginx config during init?
    nginx_exporter_enabled: true      # enable nginx_exporter on this infra node?
    nginx_exporter_port: 9113         # nginx_exporter listen port, 9113 by default
    nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
    nginx_cert_validity: 397d         # nginx self-signed cert validity, 397d by default
    nginx_home: /www                  # nginx content dir, `/www` by default (soft link to nginx_data)
    nginx_data: /data/nginx           # nginx actual data dir, /data/nginx by default
    nginx_users: { admin : pigsty }   # nginx basic auth users: name and pass dict
    nginx_port: 80                    # nginx listen port, 80 by default
    nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
    certbot_sign: false               # sign nginx cert with certbot during setup?
    certbot_email: [email protected]     # certbot email address, used for free ssl
    certbot_options: ''               # certbot extra options
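    # a minimal sketch of signing a real cert during setup, using the keys
    # above; domain & email below are placeholders, replace with your own:
    #certbot_sign: true
    #certbot_email: [email protected]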

    #-----------------------------------------------------------------
    # DNS
    #-----------------------------------------------------------------
    dns_enabled: true                 # setup dnsmasq on this infra node?
    dns_port: 53                      # dns server listen port, 53 by default
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - "${admin_ip} i.pigsty"
      - "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

    #-----------------------------------------------------------------
    # VICTORIA
    #-----------------------------------------------------------------
    vmetrics_enabled: true            # enable victoria-metrics on this infra node?
    vmetrics_clean: false             # clean existing victoria-metrics data during init?
    vmetrics_port: 8428               # victoria-metrics listen port, 8428 by default
    vmetrics_scrape_interval: 10s     # victoria global scrape interval, 10s by default
    vmetrics_scrape_timeout: 8s       # victoria global scrape timeout, 8s by default
    vmetrics_options: >-
      -retentionPeriod=15d
      -promscrape.fileSDCheckInterval=5s
    vlogs_enabled: true               # enable victoria-logs on this infra node?
    vlogs_clean: false                # clean victoria-logs data during init?
    vlogs_port: 9428                  # victoria-logs listen port, 9428 by default
    vlogs_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
      -insert.maxLineSizeBytes=1MB
      -search.maxQueryDuration=120s
    vtraces_enabled: true             # enable victoria-traces on this infra node?
    vtraces_clean: false              # clean victoria-traces data during init?
    vtraces_port: 10428               # victoria-traces listen port, 10428 by default
    vtraces_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
    vmalert_enabled: true             # enable vmalert on this infra node?
    vmalert_port: 8880                # vmalert listen port, 8880 by default
    vmalert_options: ''              # vmalert extra server options
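    # retention is controlled by the -retentionPeriod flag shown above; e.g.
    # keeping metrics for 30 days (illustrative value) could look like:
    #vmetrics_options: '-retentionPeriod=30d -promscrape.fileSDCheckInterval=5s'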

    #-----------------------------------------------------------------
    # PROMETHEUS
    #-----------------------------------------------------------------
    blackbox_enabled: true            # setup blackbox_exporter on this infra node?
    blackbox_port: 9115               # blackbox_exporter listen port, 9115 by default
    blackbox_options: ''              # blackbox_exporter extra server options
    alertmanager_enabled: true        # setup alertmanager on this infra node?
    alertmanager_port: 9059           # alertmanager listen port, 9059 by default
    alertmanager_options: ''          # alertmanager extra server options
    exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default

    #-----------------------------------------------------------------
    # GRAFANA
    #-----------------------------------------------------------------
    grafana_enabled: true             # enable grafana on this infra node?
    grafana_port: 3000                # default listen port for grafana
    grafana_clean: false              # clean grafana data during init?
    grafana_admin_username: admin     # grafana admin username, `admin` by default
    grafana_admin_password: pigsty    # grafana admin password, `pigsty` by default
    grafana_auth_proxy: false         # enable grafana auth proxy?
    grafana_pgurl: ''                 # external postgres database url for grafana if given
    grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource
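    # a sketch of pointing grafana at an external postgres, assuming a standard
    # postgres url; the user/password here are placeholders borrowed from the
    # commented grafana database example further below:
    #grafana_pgurl: 'postgres://dbuser_grafana:[email protected]:5432/grafana'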


    #================================================================#
    #                         VARS: NODE                             #
    #================================================================#

    #-----------------------------------------------------------------
    # NODE_IDENTITY
    #-----------------------------------------------------------------
    #nodename:           # [INSTANCE] # node instance identity, use hostname if missing, optional
    node_cluster: nodes   # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
    nodename_overwrite: true          # overwrite node's hostname with nodename?
    nodename_exchange: false          # exchange nodename among play hosts?
    node_id_from_pg: true             # use postgres identity as node identity if applicable?

    #-----------------------------------------------------------------
    # NODE_DNS
    #-----------------------------------------------------------------
    node_write_etc_hosts: true        # modify `/etc/hosts` on target node?
    node_default_etc_hosts:           # static dns records in `/etc/hosts`
      - "${admin_ip} i.pigsty"
    node_etc_hosts: []                # extra static dns records in `/etc/hosts`
    node_dns_method: add              # how to handle dns servers: add,none,overwrite
    node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
    node_dns_options:                 # dns resolv options in `/etc/resolv.conf`
      - options single-request-reopen timeout:1

    #-----------------------------------------------------------------
    # NODE_PACKAGE
    #-----------------------------------------------------------------
    node_repo_modules: local          # upstream repo to be added on node, local by default
    node_repo_remove: true            # remove existing repo on node?
    node_packages: [openssh-server]   # packages to be installed on current nodes with the latest version
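    # the list takes package names as-is; an illustrative override that also
    # upgrades a few tools from the default set below to the latest version:
    #node_packages: [ openssh-server, wget, rsync ]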
    node_default_packages:            # default packages to be installed on all nodes
      - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
      - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
      - zlib,yum,audit,bind-utils,readline,vim-minimal,node_exporter,grubby,openssh-server,openssh-clients,chkconfig,vector

    #-----------------------------------------------------------------
    # NODE_SEC
    #-----------------------------------------------------------------
    node_selinux_mode: permissive     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: off,none,zone. zone by default
    node_firewall_intranet:           # which intranet cidr considered as internal network
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
    node_firewall_public_port:        # expose these ports to public network in (zone, strict) mode
      - 22                            # enable ssh access
      - 80                            # enable http access
      - 443                           # enable https access
      - 5432                          # enable postgresql access (think twice before exposing it!)

    #-----------------------------------------------------------------
    # NODE_TUNE
    #-----------------------------------------------------------------
    node_disable_numa: false          # disable node numa, reboot required
    node_disable_swap: false          # disable node swap, use with caution
    node_static_network: true         # preserve dns resolver settings after reboot
    node_disk_prefetch: false         # setup disk prefetch on HDD to increase performance
    node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
    node_hugepage_count: 0            # number of 2MB hugepage, take precedence over ratio
    node_hugepage_ratio: 0            # node mem hugepage ratio, 0 disable it by default
    node_overcommit_ratio: 0          # node mem overcommit ratio, 0 disable it by default
    node_tune: oltp                   # node tuned profile: none,oltp,olap,crit,tiny
    node_sysctl_params: { }           # sysctl parameters in k:v format in addition to tuned
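    # extra sysctl entries merge on top of the tuned profile; an illustrative
    # (not recommended as-is) override could be:
    #node_sysctl_params: { vm.swappiness: 10 }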

    #-----------------------------------------------------------------
    # NODE_ADMIN
    #-----------------------------------------------------------------
    node_data: /data                  # node main data directory, `/data` by default
    node_admin_enabled: true          # create an admin user on target node?
    node_admin_uid: 88                # uid and gid for node admin user
    node_admin_username: dba          # name of node admin user, `dba` by default
    node_admin_sudo: nopass           # admin sudo privilege, all,nopass. nopass by default
    node_admin_ssh_exchange: true     # exchange admin ssh key among node cluster
    node_admin_pk_current: true       # add current user's ssh pk to admin authorized_keys
    node_admin_pk_list: []            # ssh public keys to be added to admin user
    node_aliases: {}                  # extra shell aliases to be added, k:v dict

    #-----------------------------------------------------------------
    # NODE_TIME
    #-----------------------------------------------------------------
    node_timezone: ''                 # setup node timezone, empty string to skip
    node_ntp_enabled: true            # enable chronyd time sync service?
    node_ntp_servers:                 # ntp servers in `/etc/chrony.conf`
      - pool pool.ntp.org iburst
    node_crontab_overwrite: true      # overwrite or append to `/etc/crontab`?
    node_crontab: [ ]                 # crontab entries in `/etc/crontab`
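    # entries are raw /etc/crontab lines; e.g. the daily 1am full backup used
    # by the pgsql cluster examples later in this document:
    #node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ]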

    #-----------------------------------------------------------------
    # NODE_VIP
    #-----------------------------------------------------------------
    vip_enabled: false                # enable vip on this node cluster?
    # vip_address:         [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
    # vip_vrid:            [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
    vip_role: backup                  # optional, `master|backup`, backup by default, use as init role
    vip_preempt: false                # optional, `true/false`, false by default, enable vip preemption
    vip_interface: eth0               # node vip network interface to listen, `eth0` by default
    vip_dns_suffix: ''                # node vip dns name suffix, empty string by default
    vip_exporter_port: 9650           # keepalived exporter listen port, 9650 by default
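    # a sketch of enabling the keepalived vip on a node cluster; the address
    # and vrid are placeholders (vrid must be unique within the same VLAN):
    #vip_enabled: true
    #vip_address: 10.10.10.2
    #vip_vrid: 128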

    #-----------------------------------------------------------------
    # HAPROXY
    #-----------------------------------------------------------------
    haproxy_enabled: true             # enable haproxy on this node?
    haproxy_clean: false              # cleanup all existing haproxy config?
    haproxy_reload: true              # reload haproxy after config?
    haproxy_auth_enabled: true        # enable authentication for haproxy admin page
    haproxy_admin_username: admin     # haproxy admin username, `admin` by default
    haproxy_admin_password: pigsty    # haproxy admin password, `pigsty` by default
    haproxy_exporter_port: 9101       # haproxy admin/exporter port, 9101 by default
    haproxy_client_timeout: 24h       # client side connection timeout, 24h by default
    haproxy_server_timeout: 24h       # server side connection timeout, 24h by default
    haproxy_services: []              # list of haproxy service to be exposed on node

    #-----------------------------------------------------------------
    # NODE_EXPORTER
    #-----------------------------------------------------------------
    node_exporter_enabled: true       # setup node_exporter on this node?
    node_exporter_port: 9100          # node exporter listen port, 9100 by default
    node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'

    #-----------------------------------------------------------------
    # VECTOR
    #-----------------------------------------------------------------
    vector_enabled: true              # enable vector log collector?
    vector_clean: false               # purge vector data dir during init?
    vector_data: /data/vector         # vector data dir, /data/vector by default
    vector_port: 9598                 # vector metrics port, 9598 by default
    vector_read_from: beginning       # vector read from beginning or end
    vector_log_endpoint: [ infra ]    # send vector logs to this endpoint if defined


    #================================================================#
    #                        VARS: DOCKER                            #
    #================================================================#
    docker_enabled: false             # enable docker on this node?
    docker_data: /data/docker         # docker data directory, /data/docker by default
    docker_storage_driver: overlay2   # docker storage driver, can be zfs, btrfs
    docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
    docker_registry_mirrors: []       # docker registry mirror list
    docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
    docker_image: []                  # docker image to be pulled after bootstrap
    docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern
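    # a hedged example of using a registry mirror and pulling an image after
    # bootstrap; both values below are placeholders:
    #docker_registry_mirrors: [ 'https://registry.example.com' ]
    #docker_image: [ 'redis:latest' ]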

    #================================================================#
    #                         VARS: ETCD                             #
    #================================================================#
    #etcd_seq: 1                      # etcd instance identifier, explicitly required
    etcd_cluster: etcd                # etcd cluster & group name, etcd by default
    etcd_safeguard: false             # prevent purging running etcd instance?
    etcd_clean: true                  # purge existing etcd during initialization?
    etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
    etcd_port: 2379                   # etcd client port, 2379 by default
    etcd_peer_port: 2380              # etcd peer port, 2380 by default
    etcd_init: new                    # etcd initial cluster state, new or existing
    etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
    etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default
    etcd_root_password: Etcd.Root     # etcd root password for RBAC, change it!


    #================================================================#
    #                         VARS: MINIO                            #
    #================================================================#
    #minio_seq: 1                     # minio instance identifier, REQUIRED
    minio_cluster: minio              # minio cluster identifier, REQUIRED
    minio_clean: false                # cleanup minio during init? false by default
    minio_user: minio                 # minio os user, `minio` by default
    minio_https: true                 # use https for minio, true by default
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multi drivers
    #minio_volumes:                   # minio data volumes, override defaults if specified
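    # multi-drive deployments can use the {x...y} expansion noted above, e.g.
    # four drives per node (paths illustrative):
    #minio_data: '/data{1...4}/minio'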
    minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
    minio_port: 9000                  # minio service port, 9000 by default
    minio_admin_port: 9001            # minio console port, 9001 by default
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: S3User.MinIO    # root secret key, `S3User.MinIO` by default
    minio_extra_vars: ''              # extra environment variables
    minio_provision: true             # run minio provisioning tasks?
    minio_alias: sss                  # alias name for local minio deployment
    #minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
    minio_buckets:                    # list of minio bucket to be created
      - { name: pgsql }
      - { name: meta ,versioning: true }
      - { name: data }
    minio_users:                      # list of minio user to be created
      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #================================================================#
    #                         VARS: REDIS                            #
    #================================================================#
    #redis_cluster:        <CLUSTER> # redis cluster name, required identity parameter
    #redis_node: 1            <NODE> # redis node sequence number, node int id required
    #redis_instances: {}      <NODE> # redis instances definition on this redis node
    redis_fs_main: /data              # redis main data mountpoint, `/data` by default
    redis_exporter_enabled: true      # install redis exporter on redis nodes?
    redis_exporter_port: 9121         # redis exporter listen port, 9121 by default
    redis_exporter_options: ''        # cli args and extra options for redis exporter
    redis_safeguard: false            # prevent purging running redis instance?
    redis_clean: true                 # purge existing redis during init?
    redis_rmdata: true                # remove redis data when purging redis server?
    redis_mode: standalone            # redis mode: standalone,cluster,sentinel
    redis_conf: redis.conf            # redis config template path, not used by sentinel
    redis_bind_address: '0.0.0.0'     # redis bind address, empty string will use host ip
    redis_max_memory: 1GB             # max memory used by each redis instance
    redis_mem_policy: allkeys-lru     # redis memory eviction policy
    redis_password: ''                # redis password, empty string will disable password
    redis_rdb_save: ['1200 1']        # redis rdb save directives, disable with empty list
    redis_aof_enabled: false          # enable redis append only file?
    redis_rename_commands: {}         # rename redis dangerous commands
    redis_cluster_replicas: 1         # replica number for one master in redis cluster
    redis_sentinel_monitor: []        # sentinel master list, works on sentinel cluster only


    #================================================================#
    #                         VARS: PGSQL                            #
    #================================================================#

    #-----------------------------------------------------------------
    # PG_IDENTITY
    #-----------------------------------------------------------------
    pg_mode: pgsql          #CLUSTER  # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
    # pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
    # pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
    # pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
    # pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
    # pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
    # pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
    # pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
    # gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
    pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance

    #-----------------------------------------------------------------
    # PG_BUSINESS
    #-----------------------------------------------------------------
    # postgres business object definition, overwrite in group vars
    pg_users: []                      # postgres business users
    pg_databases: []                  # postgres business databases
    pg_services: []                   # postgres business services
    pg_hba_rules: []                  # business hba rules for postgres
    pgb_hba_rules: []                 # business hba rules for pgbouncer
    # global credentials, overwrite in global vars
    pg_dbsu_password: ''              # dbsu password, empty string means no dbsu password by default
    pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor

    #-----------------------------------------------------------------
    # PG_INSTALL
    #-----------------------------------------------------------------
    pg_dbsu: postgres                 # os dbsu name, postgres by default, better not change it
    pg_dbsu_uid: 26                   # os dbsu uid and gid, 26 for default postgres users and groups
    pg_dbsu_sudo: limit               # dbsu sudo privilege, none,limit,all,nopass. limit by default
    pg_dbsu_home: /var/lib/pgsql      # postgresql home directory, `/var/lib/pgsql` by default
    pg_dbsu_ssh_exchange: true        # exchange postgres dbsu ssh key among same pgsql cluster
    pg_version: 18                    # postgres major version to be installed, 18 by default
    pg_bin_dir: /usr/pgsql/bin        # postgres binary dir, `/usr/pgsql/bin` by default
    pg_log_dir: /pg/log/postgres      # postgres log dir, `/pg/log/postgres` by default
    pg_packages:                      # pg packages to be installed, alias can be used
      - pgsql-main pgsql-common
    pg_extensions: []                 # pg extensions to be installed, alias can be used
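    # aliases can be used here as noted above; e.g. the two basic extensions
    # shipped with the meta template could be declared as:
    #pg_extensions: [ postgis, pgvector ]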

    #-----------------------------------------------------------------
    # PG_BOOTSTRAP
    #-----------------------------------------------------------------
    pg_data: /pg/data                 # postgres data directory, `/pg/data` by default
    pg_fs_main: /data/postgres        # postgres main data directory, `/data/postgres` by default
    pg_fs_backup: /data/backups       # postgres backup data directory, `/data/backups` by default
    pg_storage_type: SSD              # storage type for pg main data: SSD,HDD. SSD by default
    pg_dummy_filesize: 64MiB          # size of `/pg/dummy`, hold 64MB disk space for emergency use
    pg_listen: '0.0.0.0'              # postgres/pgbouncer listen addresses, comma separated list
    pg_port: 5432                     # postgres listen port, 5432 by default
    pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
    patroni_enabled: true             # if disabled, no postgres cluster will be created during init
    patroni_mode: default             # patroni working mode: default,pause,remove
    pg_namespace: /pg                 # top level key namespace in etcd, used by patroni & vip
    patroni_port: 8008                # patroni listen port, 8008 by default
    patroni_log_dir: /pg/log/patroni  # patroni log dir, `/pg/log/patroni` by default
    patroni_ssl_enabled: false        # secure patroni RestAPI communications with SSL?
    patroni_watchdog_mode: off        # patroni watchdog mode: automatic,required,off. off by default
    patroni_username: postgres        # patroni restapi username, `postgres` by default
    patroni_password: Patroni.API     # patroni restapi password, `Patroni.API` by default
    pg_etcd_password: ''              # etcd password for this pg cluster, '' to use pg_cluster
    pg_primary_db: postgres           # primary database name, used by citus etc., postgres by default
    pg_parameters: {}                 # extra parameters in postgresql.auto.conf
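    # parameters given here land in postgresql.auto.conf; a small illustrative
    # example (value borrowed from the monitor user definition below):
    #pg_parameters: { log_min_duration_statement: 1000 }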
    pg_files: []                      # extra files to be copied to postgres data directory (e.g. license)
    pg_conf: oltp.yml                 # config template: oltp,olap,crit,tiny. `oltp.yml` by default
    pg_max_conn: auto                 # postgres max connections, `auto` will use recommended value
    pg_shared_buffer_ratio: 0.25      # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
    pg_io_method: worker              # io method for postgres, auto,fsync,worker,io_uring, worker by default
    pg_rto: 30                        # recovery time objective in seconds, `30s` by default
    pg_rpo: 1048576                   # recovery point objective in bytes, at most `1MiB` by default
    pg_libs: 'pg_stat_statements, auto_explain'  # preloaded libraries, `pg_stat_statements,auto_explain` by default
    pg_delay: 0                       # replication apply delay for standby cluster leader
    pg_checksum: true                 # enable data checksum for postgres cluster?
    pg_encoding: UTF8                 # database cluster encoding, `UTF8` by default
    pg_locale: C                      # database cluster locale, `C` by default
    pg_lc_collate: C                  # database cluster collate, `C` by default
    pg_lc_ctype: C                    # database character type, `C` by default
    #pgsodium_key: ""                 # pgsodium key, 64 hex digit, default to sha256(pg_cluster)
    #pgsodium_getkey_script: ""       # pgsodium getkey script path, pgsodium_getkey by default

    #-----------------------------------------------------------------
    # PG_PROVISION
    #-----------------------------------------------------------------
    pg_provision: true                # provision postgres cluster after bootstrap
    pg_init: pg-init                  # provision init script for cluster template, `pg-init` by default
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_privileges:            # default privileges when created by admin user
      - GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
      - GRANT SELECT     ON TABLES    TO dbrole_readonly
      - GRANT SELECT     ON SEQUENCES TO dbrole_readonly
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
      - GRANT USAGE      ON SCHEMAS   TO dbrole_offline
      - GRANT SELECT     ON TABLES    TO dbrole_offline
      - GRANT SELECT     ON SEQUENCES TO dbrole_offline
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
      - GRANT INSERT     ON TABLES    TO dbrole_readwrite
      - GRANT UPDATE     ON TABLES    TO dbrole_readwrite
      - GRANT DELETE     ON TABLES    TO dbrole_readwrite
      - GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
      - GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
      - GRANT TRUNCATE   ON TABLES    TO dbrole_admin
      - GRANT REFERENCES ON TABLES    TO dbrole_admin
      - GRANT TRIGGER    ON TABLES    TO dbrole_admin
      - GRANT CREATE     ON SCHEMAS   TO dbrole_admin
    pg_default_schemas: [ monitor ]   # default schemas to be created
    pg_default_extensions:            # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor }
      - { name: pgstattuple        ,schema: monitor }
      - { name: pg_buffercache     ,schema: monitor }
      - { name: pageinspect        ,schema: monitor }
      - { name: pg_prewarm         ,schema: monitor }
      - { name: pg_visibility      ,schema: monitor }
      - { name: pg_freespacemap    ,schema: monitor }
      - { name: postgres_fdw       ,schema: public  }
      - { name: file_fdw           ,schema: public  }
      - { name: btree_gist         ,schema: public  }
      - { name: btree_gin          ,schema: public  }
      - { name: pg_trgm            ,schema: public  }
      - { name: intagg             ,schema: public  }
      - { name: intarray           ,schema: public  }
      - { name: pg_repack }
    pg_reload: true                   # reload postgres after hba changes
    pg_default_hba_rules:             # postgres default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}
    pgb_default_hba_rules:            # pgbouncer default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

    #-----------------------------------------------------------------
    # PG_BACKUP
    #-----------------------------------------------------------------
    pgbackrest_enabled: true          # enable pgbackrest on pgsql host?
    pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
    pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_init_backup: true      # take a full backup after pgbackrest is initialized?
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, not used by minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
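    # to send backups to the minio repo defined above instead of the local fs,
    # switching the method should be enough (per the method list above):
    #pgbackrest_method: minio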

    #-----------------------------------------------------------------
    # PG_ACCESS
    #-----------------------------------------------------------------
    pgbouncer_enabled: true           # if disabled, pgbouncer will not be launched on pgsql host
    pgbouncer_port: 6432              # pgbouncer listen port, 6432 by default
    pgbouncer_log_dir: /pg/log/pgbouncer  # pgbouncer log dir, `/pg/log/pgbouncer` by default
    pgbouncer_auth_query: false       # query postgres to retrieve unlisted business users?
    pgbouncer_poolmode: transaction   # pooling mode: transaction,session,statement, transaction by default
    pgbouncer_sslmode: disable        # pgbouncer client ssl mode, disable by default
    pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
    pg_weight: 100          #INSTANCE # relative load balance weight in service, 100 by default, 0-255
    pg_service_provider: ''           # dedicated haproxy node group name, or empty string for local nodes by default
    pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
    pg_default_services:              # postgres default service definitions
      - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
    pg_vip_enabled: false             # enable a l2 vip for pgsql primary? false by default
    pg_vip_address: 127.0.0.1/24      # vip address in `<ipv4>/<mask>` format, require if vip is enabled
    pg_vip_interface: eth0            # vip network interface to listen, eth0 by default
    pg_dns_suffix: ''                 # pgsql dns suffix, '' by default
    pg_dns_target: auto               # auto, primary, vip, none, or ad hoc ip

    #-----------------------------------------------------------------
    # PG_MONITOR
    #-----------------------------------------------------------------
    pg_exporter_enabled: true              # enable pg_exporter on pgsql hosts?
    pg_exporter_config: pg_exporter.yml    # pg_exporter configuration file name
    pg_exporter_cache_ttls: '1,10,60,300'  # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
    pg_exporter_port: 9630                 # pg_exporter listen port, 9630 by default
    pg_exporter_params: 'sslmode=disable'  # extra url parameters for pg_exporter dsn
    pg_exporter_url: ''                    # overwrite auto-generate pg dsn if specified
    pg_exporter_auto_discovery: true       # enable auto database discovery? enabled by default
    pg_exporter_exclude_database: 'template0,template1,postgres' # csv of databases that WILL NOT be monitored during auto-discovery
    pg_exporter_include_database: ''       # csv of databases that WILL BE monitored during auto-discovery
    pg_exporter_connect_timeout: 200       # pg_exporter connect timeout in ms, 200 by default
    pg_exporter_options: ''                # overwrite extra options for pg_exporter
    pgbouncer_exporter_enabled: true       # enable pgbouncer_exporter on pgsql hosts?
    pgbouncer_exporter_port: 9631          # pgbouncer_exporter listen port, 9631 by default
    pgbouncer_exporter_url: ''             # overwrite auto-generate pgbouncer dsn if specified
    pgbouncer_exporter_options: ''         # overwrite extra options for pgbouncer_exporter
    pgbackrest_exporter_enabled: true      # enable pgbackrest_exporter on pgsql hosts?
    pgbackrest_exporter_port: 9854         # pgbackrest_exporter listen port, 9854 by default
    pgbackrest_exporter_options: >
      --collect.interval=120
      --log.level=info

    #-----------------------------------------------------------------
    # PG_REMOVE
    #-----------------------------------------------------------------
    pg_safeguard: false               # stop pg_remove running if pg_safeguard is enabled, false by default
    pg_rm_data: true                  # remove postgres data during remove? true by default
    pg_rm_backup: true                # remove pgbackrest backup during primary remove? true by default
    pg_rm_pkg: true                   # uninstall postgres packages during remove? true by default

...

Explanation

The demo/el template is optimized for the Enterprise Linux family of distributions.

Supported Distributions:

  • RHEL 8/9/10
  • Rocky Linux 8/9/10
  • Alma Linux 8/9/10
  • Oracle Linux 8/9

Key Features:

  • Uses the EPEL and PGDG repositories
  • Optimized for the YUM/DNF package manager
  • Supports EL-specific package names

Use Cases:

  • Enterprise production environments (RHEL/Rocky/Alma recommended)
  • Long-term support and stability requirements
  • Environments using the Red Hat ecosystem

32 - demo/debian

Configuration template optimized for Debian/Ubuntu

The demo/debian configuration template is optimized for Debian and Ubuntu distributions.


Overview

  • Config Name: demo/debian
  • Node Count: Single node
  • Description: Debian/Ubuntu optimized configuration template
  • OS Distro: d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, demo/el

Usage:

./configure -c demo/debian [-i <primary_ip>]

Content

Source: pigsty/conf/demo/debian.yml

---
#==============================================================#
# File      :   debian.yml
# Desc      :   Default parameters for Debian/Ubuntu in Pigsty
# Ctime     :   2020-05-22
# Mtime     :   2025-12-27
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#==============================================================#
#                        Sandbox (4-node)                      #
#==============================================================#
# admin user : vagrant  (nopass ssh & sudo already set)        #
# 1.  meta    :    10.10.10.10     (2 Core | 4GB)    pg-meta   #
# 2.  node-1  :    10.10.10.11     (1 Core | 1GB)    pg-test-1 #
# 3.  node-2  :    10.10.10.12     (1 Core | 1GB)    pg-test-2 #
# 4.  node-3  :    10.10.10.13     (1 Core | 1GB)    pg-test-3 #
# (replace these ips when your 4-node env has different addrs) #
# VIP 2: (l2 vip is available inside same LAN )                #
#     pg-meta --->  10.10.10.2 ---> 10.10.10.10                #
#     pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}          #
#==============================================================#


all:

  ##################################################################
  #                            CLUSTERS                            #
  ##################################################################
  # infra, etcd, minio, pgsql and redis clusters are defined as
  # k:v pairs inside `all.children`, where the key is the cluster
  # name and the value is the cluster definition, consisting of
  # two parts:
  # `hosts`: cluster members' ip and instance-level variables
  # `vars` : cluster-level variables
  ##################################################################
  children:                                 # groups definition

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql cluster: pg-meta (CMDB)    #
    #----------------------------------#
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
      vars:
        pg_cluster: pg-meta

        # define business databases here: https://doc.pgsty.com/pgsql/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path (relative to the ansible search path, e.g. files/)
            schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          #- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          #- { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          #- { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          #- { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

        # define business users here: https://doc.pgsty.com/pgsql/user
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            #login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create database? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
          #- {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
          #- {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
          #- {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }

        # define business service here: https://doc.pgsty.com/pgsql/service
        pg_services:                        # extra services in addition to pg_default_services, array of service definition
          # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
          - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
            port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
            ip: "*"                         # optional, service bind ip address, `*` for all ip by default
            selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
            dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
            check: /sync                    # optional, health check url path, / by default
            backup: "[? pg_role == `primary`]"  # backup server selector
            maxconn: 3000                   # optional, max allowed front-end connection
            balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
            options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
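          # e.g. once deployed, the standby service above is exposed on port 5435 of every
          # cluster member; a hypothetical check using the dbuser_view user defined above
          # (assuming a `meta` business database exists on this cluster):
          #   psql postgres://dbuser_view:[email protected]:5435/meta -c 'SELECT 1'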

        # define pg extensions: https://doc.pgsty.com/pgsql/extension
        pg_libs: 'pg_stat_statements, auto_explain' # preloaded libraries; append timescaledb here if you need it
        #pg_extensions: [] # extensions to be installed on this cluster

        # define HBA rules here: https://doc.pgsty.com/pgsql/hba
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
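          # the rule above renders into pg_hba.conf roughly as follows, with `infra`
          # expanded to the infra node addresses (10.10.10.10 here) and `pwd` to the
          # configured password method (scram-sha-256 by default):
          #   host  all  dbuser_view  10.10.10.10/32  scram-sha-256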

        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

        node_crontab:  # make a full backup at 1am every day
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }] # create a database and user named 'test'
        node_tune: tiny
        pg_conf: tiny.yml
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        node_crontab:  # make a full backup at 1am on monday, and incremental backups on the other days
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }
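    # e.g. after provisioning, the native cluster above could be inspected with redis-cli
    # (a sketch, assuming redis-cli is available on the node):
    #   redis-cli -c -h 10.10.10.12 -p 6379 -a redis.test cluster info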


  ####################################################################
  #                             VARS                                 #
  ####################################################################
  vars:                               # global variables


    #================================================================#
    #                         VARS: INFRA                            #
    #================================================================#

    #-----------------------------------------------------------------
    # META
    #-----------------------------------------------------------------
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    language: en                      # default language: en, zh
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    #-----------------------------------------------------------------
    # CA
    #-----------------------------------------------------------------
    ca_create: true                   # create ca if not exists? or just abort
    ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
    cert_validity: 7300d              # cert validity, 20 years by default

    #-----------------------------------------------------------------
    # INFRA_IDENTITY
    #-----------------------------------------------------------------
    #infra_seq: 1                     # infra node identity, explicitly required
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
    infra_data: /data/infra           # default data path for infrastructure data

    #-----------------------------------------------------------------
    # REPO
    #-----------------------------------------------------------------
    repo_enabled: true                # create a yum repo on this infra node?
    repo_home: /www                   # repo home dir, `/www` by default
    repo_name: pigsty                 # repo name, pigsty by default
    repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
    repo_remove: true                 # remove existing upstream repo
    repo_modules: infra,node,pgsql    # which repo modules are installed in repo_upstream
    repo_upstream:                    # where to download
      - { name: pigsty-local   ,description: 'Pigsty Local'       ,module: local   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty ./' }}
      - { name: pigsty-pgsql   ,description: 'Pigsty PgSQL'       ,module: pgsql   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/pgsql/${distro_codename} ${distro_codename} main', china: 'https://repo.pigsty.cc/apt/pgsql/${distro_codename} ${distro_codename} main' }}
      - { name: pigsty-infra   ,description: 'Pigsty Infra'       ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/infra/ generic main' ,china: 'https://repo.pigsty.cc/apt/infra/ generic main' }}
      - { name: nginx          ,description: 'Nginx'              ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://nginx.org/packages/${distro_name} ${distro_codename} nginx' }}
      - { name: docker-ce      ,description: 'Docker'             ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable'                               ,china: 'https://mirrors.aliyun.com/docker-ce/linux/${distro_name} ${distro_codename} stable' }}
      - { name: base           ,description: 'Debian Basic'       ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename} main non-free-firmware'                                  ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename} main non-free-firmware' }}
      - { name: updates        ,description: 'Debian Updates'     ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename}-updates main non-free-firmware'                          ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename}-updates main non-free-firmware' }}
      - { name: security       ,description: 'Debian Security'    ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://security.debian.org/debian-security ${distro_codename}-security main non-free-firmware'            ,china: 'https://mirrors.aliyun.com/debian-security/ ${distro_codename}-security main non-free-firmware' }}
      - { name: base           ,description: 'Ubuntu Basic'       ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}           main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}           main restricted universe multiverse' }}
      - { name: updates        ,description: 'Ubuntu Updates'     ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' }}
      - { name: backports      ,description: 'Ubuntu Backports'   ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' }}
      - { name: security       ,description: 'Ubuntu Security'    ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' }}
      - { name: base           ,description: 'Ubuntu Basic'       ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}             main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}           main restricted universe multiverse' }}
      - { name: updates        ,description: 'Ubuntu Updates'     ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-updates     main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-updates   main restricted universe multiverse' }}
      - { name: backports      ,description: 'Ubuntu Backports'   ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-backports   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' }}
      - { name: security       ,description: 'Ubuntu Security'    ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-security    main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-security  main restricted universe multiverse' }}
      - { name: pgdg           ,description: 'PGDG'               ,module: pgsql   ,releases: [11,12,13,   22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg main' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg main' }}
      - { name: pgdg-beta      ,description: 'PGDG Beta'          ,module: beta    ,releases: [11,12,13,   22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg-testing main 19' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg-testing main 19' }}
      - { name: timescaledb    ,description: 'TimescaleDB'        ,module: extra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/${distro_name}/ ${distro_codename} main' }}
      - { name: citus          ,description: 'Citus'              ,module: extra   ,releases: [11,12,   20,22   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/citusdata/community/${distro_name}/ ${distro_codename} main' } }
      - { name: percona        ,description: 'Percona TDE'        ,module: percona ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/percona ${distro_codename} main' ,china: 'https://repo.pigsty.cc/apt/percona ${distro_codename} main' ,origin: 'http://repo.percona.com/ppg-18.1/apt ${distro_codename} main' }}
      - { name: wiltondb       ,description: 'WiltonDB'           ,module: mssql   ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/mssql/ ${distro_codename} main'  ,china: 'https://repo.pigsty.cc/apt/mssql/ ${distro_codename} main'  ,origin: 'https://ppa.launchpadcontent.net/wiltondb/wiltondb/ubuntu/ ${distro_codename} main'  }}
      - { name: groonga        ,description: 'Groonga Debian'     ,module: groonga ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/debian/ ${distro_codename} main' }}
      - { name: groonga        ,description: 'Groonga Ubuntu'     ,module: groonga ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/groonga/ppa/ubuntu/ ${distro_codename} main' }}
      - { name: mysql          ,description: 'MySQL'              ,module: mysql   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools', china: 'https://mirrors.tuna.tsinghua.edu.cn/mysql/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools' }}
      - { name: mongo          ,description: 'MongoDB'            ,module: mongo   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse', china: 'https://mirrors.aliyun.com/mongodb/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse' }}
      - { name: redis          ,description: 'Redis'              ,module: redis   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.redis.io/deb ${distro_codename} main' }}
      - { name: llvm           ,description: 'LLVM'               ,module: llvm    ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.llvm.org/${distro_codename}/ llvm-toolchain-${distro_codename} main' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/llvm-apt/${distro_codename}/ llvm-toolchain-${distro_codename} main' }}
      - { name: haproxyd       ,description: 'Haproxy Debian'     ,module: haproxy ,releases: [11,12            ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://haproxy.debian.net/ ${distro_codename}-backports-3.1 main' }}
      - { name: haproxyu       ,description: 'Haproxy Ubuntu'     ,module: haproxy ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/vbernat/haproxy-3.1/ubuntu/ ${distro_codename} main' }}
      - { name: grafana        ,description: 'Grafana'            ,module: grafana ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://apt.grafana.com stable main' ,china: 'https://mirrors.aliyun.com/grafana/apt/ stable main' }}
      - { name: kubernetes     ,description: 'Kubernetes'         ,module: kube    ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb/ /' }}
      - { name: gitlab-ee      ,description: 'Gitlab EE'          ,module: gitlab  ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/${distro_name}/ ${distro_codename} main' }}
      - { name: gitlab-ce      ,description: 'Gitlab CE'          ,module: gitlab  ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/${distro_name}/ ${distro_codename} main' }}
      - { name: clickhouse     ,description: 'ClickHouse'         ,module: click   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/deb/ stable main', china: 'https://mirrors.aliyun.com/clickhouse/deb/ stable main' }}
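    # note: placeholders like ${distro_name}/${distro_codename}/${admin_ip} are substituted
    # per node; e.g. on Ubuntu 24.04 (noble) the pgdg entry above would expand roughly to:
    #   deb http://apt.postgresql.org/pub/repos/apt/ noble-pgdg main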

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [ pgsql-main ]
    repo_url_packages: []

    #-----------------------------------------------------------------
    # INFRA_PACKAGE
    #-----------------------------------------------------------------
    infra_packages:                   # packages to be installed on infra nodes
      - grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
      - node-exporter,blackbox-exporter,nginx-exporter,pg-exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx
    infra_packages_pip: ''            # pip installed packages for infra nodes

    #-----------------------------------------------------------------
    # NGINX
    #-----------------------------------------------------------------
    nginx_enabled: true               # enable nginx on this infra node?
    nginx_clean: false                # clean existing nginx config during init?
    nginx_exporter_enabled: true      # enable nginx_exporter on this infra node?
    nginx_exporter_port: 9113         # nginx_exporter listen port, 9113 by default
    nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
    nginx_cert_validity: 397d         # nginx self-signed cert validity, 397d by default
    nginx_home: /www                  # nginx content dir, `/www` by default (soft link to nginx_data)
    nginx_data: /data/nginx           # nginx actual data dir, /data/nginx by default
    nginx_users: { admin : pigsty }   # nginx basic auth users: name and pass dict
    nginx_port: 80                    # nginx listen port, 80 by default
    nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
    certbot_sign: false               # sign nginx cert with certbot during setup?
    certbot_email: [email protected]     # certbot email address, used for free ssl
    certbot_options: ''               # certbot extra options

    #-----------------------------------------------------------------
    # DNS
    #-----------------------------------------------------------------
    dns_enabled: true                 # setup dnsmasq on this infra node?
    dns_port: 53                      # dns server listen port, 53 by default
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - "${admin_ip} i.pigsty"
      - "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

    #-----------------------------------------------------------------
    # VICTORIA
    #-----------------------------------------------------------------
    vmetrics_enabled: true            # enable victoria-metrics on this infra node?
    vmetrics_clean: false             # whether clean existing victoria metrics data during init?
    vmetrics_port: 8428               # victoria-metrics listen port, 8428 by default
    vmetrics_scrape_interval: 10s     # victoria global scrape interval, 10s by default
    vmetrics_scrape_timeout: 8s       # victoria global scrape timeout, 8s by default
    vmetrics_options: >-
      -retentionPeriod=15d
      -promscrape.fileSDCheckInterval=5s
    vlogs_enabled: true               # enable victoria-logs on this infra node?
    vlogs_clean: false                # clean victoria-logs data during init?
    vlogs_port: 9428                  # victoria-logs listen port, 9428 by default
    vlogs_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
      -insert.maxLineSizeBytes=1MB
      -search.maxQueryDuration=120s
    vtraces_enabled: true             # enable victoria-traces on this infra node?
    vtraces_clean: false              # clean victoria-traces data during init?
    vtraces_port: 10428               # victoria-traces listen port, 10428 by default
    vtraces_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
    vmalert_enabled: true             # enable vmalert on this infra node?
    vmalert_port: 8880                # vmalert listen port, 8880 by default
    vmalert_options: ''               # vmalert extra server options
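    # victoria-metrics is largely prometheus-compatible; a minimal smoke test against
    # its query api (assuming the default port above):
    #   curl -s 'http://10.10.10.10:8428/api/v1/query?query=up'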

    #-----------------------------------------------------------------
    # PROMETHEUS
    #-----------------------------------------------------------------
    blackbox_enabled: true            # setup blackbox_exporter on this infra node?
    blackbox_port: 9115               # blackbox_exporter listen port, 9115 by default
    blackbox_options: ''              # blackbox_exporter extra server options
    alertmanager_enabled: true        # setup alertmanager on this infra node?
    alertmanager_port: 9059           # alertmanager listen port, 9059 by default
    alertmanager_options: ''          # alertmanager extra server options
    exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default

    #-----------------------------------------------------------------
    # GRAFANA
    #-----------------------------------------------------------------
    grafana_enabled: true             # enable grafana on this infra node?
    grafana_port: 3000                # default listen port for grafana
    grafana_clean: false              # clean grafana data during init?
    grafana_admin_username: admin     # grafana admin username, `admin` by default
    grafana_admin_password: pigsty    # grafana admin password, `pigsty` by default
    grafana_auth_proxy: false         # enable grafana auth proxy?
    grafana_pgurl: ''                 # external postgres database url for grafana if given
    grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource


    #================================================================#
    #                         VARS: NODE                             #
    #================================================================#

    #-----------------------------------------------------------------
    # NODE_IDENTITY
    #-----------------------------------------------------------------
    #nodename:           # [INSTANCE] # node instance identity, use hostname if missing, optional
    node_cluster: nodes   # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
    nodename_overwrite: true          # overwrite node's hostname with nodename?
    nodename_exchange: false          # exchange nodename among play hosts?
    node_id_from_pg: true             # use postgres identity as node identity if applicable?

    #-----------------------------------------------------------------
    # NODE_DNS
    #-----------------------------------------------------------------
    node_write_etc_hosts: true        # modify `/etc/hosts` on target node?
    node_default_etc_hosts:           # static dns records in `/etc/hosts`
      - "${admin_ip} i.pigsty"
    node_etc_hosts: []                # extra static dns records in `/etc/hosts`
    node_dns_method: add              # how to handle dns servers: add,none,overwrite
    node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
    node_dns_options:                 # dns resolv options in `/etc/resolv.conf`
      - options single-request-reopen timeout:1

    #-----------------------------------------------------------------
    # NODE_PACKAGE
    #-----------------------------------------------------------------
    node_repo_modules: local          # upstream repo to be added on node, local by default
    node_repo_remove: true            # remove existing repo on node?
    node_packages: [openssh-server]   # packages to be installed current nodes with latest version
    node_default_packages:            # default packages to be installed on all nodes
      - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
      - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
      - zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,node-exporter,openssh-server,openssh-client,vector

    #-----------------------------------------------------------------
    # NODE_SEC
    #-----------------------------------------------------------------
    node_selinux_mode: permissive     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: off,none,zone; zone by default
    node_firewall_intranet:           # which intranet cidr considered as internal network
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
    node_firewall_public_port:        # expose these ports to public network in (zone, strict) mode
      - 22                            # enable ssh access
      - 80                            # enable http access
      - 443                           # enable https access
      - 5432                          # enable postgresql access (think twice before exposing it!)

    #-----------------------------------------------------------------
    # NODE_TUNE
    #-----------------------------------------------------------------
    node_disable_numa: false          # disable node numa, reboot required
    node_disable_swap: false          # disable node swap, use with caution
    node_static_network: true         # preserve dns resolver settings after reboot
    node_disk_prefetch: false         # setup disk prefetch on HDD to increase performance
    node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
    node_hugepage_count: 0            # number of 2MB hugepage, take precedence over ratio
    node_hugepage_ratio: 0            # node mem hugepage ratio, 0 disable it by default
    node_overcommit_ratio: 0          # node mem overcommit ratio, 0 disable it by default
    node_tune: oltp                   # node tuned profile: none,oltp,olap,crit,tiny
    node_sysctl_params: { }           # sysctl parameters in k:v format in addition to tuned

    #-----------------------------------------------------------------
    # NODE_ADMIN
    #-----------------------------------------------------------------
    node_data: /data                  # node main data directory, `/data` by default
    node_admin_enabled: true          # create an admin user on target node?
    node_admin_uid: 88                # uid and gid for node admin user
    node_admin_username: dba          # name of node admin user, `dba` by default
    node_admin_sudo: nopass           # admin sudo privilege, all,nopass. nopass by default
    node_admin_ssh_exchange: true     # exchange admin ssh key among node cluster
    node_admin_pk_current: true       # add current user's ssh pk to admin authorized_keys
    node_admin_pk_list: []            # ssh public keys to be added to admin user
    node_aliases: {}                  # extra shell aliases to be added, k:v dict

    #-----------------------------------------------------------------
    # NODE_TIME
    #-----------------------------------------------------------------
    node_timezone: ''                 # setup node timezone, empty string to skip
    node_ntp_enabled: true            # enable chronyd time sync service?
    node_ntp_servers:                 # ntp servers in `/etc/chrony.conf`
      - pool pool.ntp.org iburst
    node_crontab_overwrite: true      # overwrite or append to `/etc/crontab`?
    node_crontab: [ ]                 # crontab entries in `/etc/crontab`

    #-----------------------------------------------------------------
    # NODE_VIP
    #-----------------------------------------------------------------
    vip_enabled: false                # enable vip on this node cluster?
    # vip_address:         [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
    # vip_vrid:            [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
    vip_role: backup                  # optional, `master|backup`, backup by default, use as init role
    vip_preempt: false                # optional, `true/false`, false by default, enable vip preemption
    vip_interface: eth0               # node vip network interface to listen, `eth0` by default
    vip_dns_suffix: ''                # node vip dns name suffix, empty string by default
    vip_exporter_port: 9650           # keepalived exporter listen port, 9650 by default

    #-----------------------------------------------------------------
    # HAPROXY
    #-----------------------------------------------------------------
    haproxy_enabled: true             # enable haproxy on this node?
    haproxy_clean: false              # cleanup all existing haproxy config?
    haproxy_reload: true              # reload haproxy after config?
    haproxy_auth_enabled: true        # enable authentication for haproxy admin page
    haproxy_admin_username: admin     # haproxy admin username, `admin` by default
    haproxy_admin_password: pigsty    # haproxy admin password, `pigsty` by default
    haproxy_exporter_port: 9101       # haproxy admin/exporter port, 9101 by default
    haproxy_client_timeout: 24h       # client side connection timeout, 24h by default
    haproxy_server_timeout: 24h       # server side connection timeout, 24h by default
    haproxy_services: []              # list of haproxy service to be exposed on node

    #-----------------------------------------------------------------
    # NODE_EXPORTER
    #-----------------------------------------------------------------
    node_exporter_enabled: true       # setup node_exporter on this node?
    node_exporter_port: 9100          # node exporter listen port, 9100 by default
    node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'

    #-----------------------------------------------------------------
    # VECTOR
    #-----------------------------------------------------------------
    vector_enabled: true              # enable vector log collector?
    vector_clean: false               # purge vector data dir during init?
    vector_data: /data/vector         # vector data dir, /data/vector by default
    vector_port: 9598                 # vector metrics port, 9598 by default
    vector_read_from: beginning       # vector read from beginning or end
    vector_log_endpoint: [ infra ]    # if defined, send vector logs to this endpoint


    #================================================================#
    #                        VARS: DOCKER                            #
    #================================================================#
    docker_enabled: false             # enable docker on this node?
    docker_data: /data/docker         # docker data directory, /data/docker by default
    docker_storage_driver: overlay2   # docker storage driver, can be zfs, btrfs
    docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
    docker_registry_mirrors: []       # docker registry mirror list
    docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
    docker_image: []                  # docker image to be pulled after bootstrap
    docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern

    #================================================================#
    #                         VARS: ETCD                             #
    #================================================================#
    #etcd_seq: 1                      # etcd instance identifier, explicitly required
    etcd_cluster: etcd                # etcd cluster & group name, etcd by default
    etcd_safeguard: false             # prevent purging running etcd instance?
    etcd_clean: true                  # purging existing etcd during initialization?
    etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
    etcd_port: 2379                   # etcd client port, 2379 by default
    etcd_peer_port: 2380              # etcd peer port, 2380 by default
    etcd_init: new                    # etcd initial cluster state, new or existing
    etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
    etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default
    etcd_root_password: Etcd.Root     # etcd root password for RBAC, change it!
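    # pigsty typically writes etcdctl environment variables (endpoints, tls certs) to
    # /etc/profile.d/etcd.sh on etcd nodes, so a health check sketch would be:
    #   . /etc/profile.d/etcd.sh && etcdctl endpoint health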


    #================================================================#
    #                         VARS: MINIO                            #
    #================================================================#
    #minio_seq: 1                     # minio instance identifier, REQUIRED
    minio_cluster: minio              # minio cluster identifier, REQUIRED
    minio_clean: false                # cleanup minio during init?, false by default
    minio_user: minio                 # minio os user, `minio` by default
    minio_https: true                 # use https for minio, true by default
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multiple drives
    #minio_volumes:                   # minio data volumes, override defaults if specified
    minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
    minio_port: 9000                  # minio service port, 9000 by default
    minio_admin_port: 9001            # minio console port, 9001 by default
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: S3User.MinIO    # root secret key, `S3User.MinIO` by default
    minio_extra_vars: ''              # extra environment variables
    minio_provision: true             # run minio provisioning tasks?
    minio_alias: sss                  # alias name for local minio deployment
    #minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
    minio_buckets:                    # list of minio bucket to be created
      - { name: pgsql }
      - { name: meta ,versioning: true }
      - { name: data }
    minio_users:                      # list of minio user to be created
      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }
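    # the mcli client installed by pigsty can verify the deployment; a sketch, assuming
    # the default credentials above and a trusted pigsty CA:
    #   mcli alias set sss https://sss.pigsty:9000 minioadmin S3User.MinIO
    #   mcli ls sss/pgsql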


    #================================================================#
    #                         VARS: REDIS                            #
    #================================================================#
    #redis_cluster:        <CLUSTER> # redis cluster name, required identity parameter
    #redis_node: 1            <NODE> # redis node sequence number, node int id required
    #redis_instances: {}      <NODE> # redis instances definition on this redis node
    redis_fs_main: /data              # redis main data mountpoint, `/data` by default
    redis_exporter_enabled: true      # install redis exporter on redis nodes?
    redis_exporter_port: 9121         # redis exporter listen port, 9121 by default
    redis_exporter_options: ''        # cli args and extra options for redis exporter
    redis_safeguard: false            # prevent purging running redis instance?
    redis_clean: true                 # purging existing redis during init?
    redis_rmdata: true                # remove redis data when purging redis server?
    redis_mode: standalone            # redis mode: standalone,cluster,sentinel
    redis_conf: redis.conf            # redis config template path, except sentinel
    redis_bind_address: '0.0.0.0'     # redis bind address, empty string will use host ip
    redis_max_memory: 1GB             # max memory used by each redis instance
    redis_mem_policy: allkeys-lru     # redis memory eviction policy
    redis_password: ''                # redis password, empty string will disable password
    redis_rdb_save: ['1200 1']        # redis rdb save directives, disable with empty list
    redis_aof_enabled: false          # enable redis append only file?
    redis_rename_commands: {}         # rename redis dangerous commands
    redis_cluster_replicas: 1         # replica number for one master in redis cluster
    redis_sentinel_monitor: []        # sentinel master list, works on sentinel cluster only


    #================================================================#
    #                         VARS: PGSQL                            #
    #================================================================#

    #-----------------------------------------------------------------
    # PG_IDENTITY
    #-----------------------------------------------------------------
    pg_mode: pgsql          #CLUSTER  # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
    # pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
    # pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
    # pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
    # pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
    # pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
    # pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
    # pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
    # gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
    pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance

    #-----------------------------------------------------------------
    # PG_BUSINESS
    #-----------------------------------------------------------------
    # postgres business object definition, overwrite in group vars
    pg_users: []                      # postgres business users
    pg_databases: []                  # postgres business databases
    pg_services: []                   # postgres business services
    pg_hba_rules: []                  # business hba rules for postgres
    pgb_hba_rules: []                 # business hba rules for pgbouncer
    # global credentials, overwrite in global vars
    pg_dbsu_password: ''              # dbsu password, empty string means no dbsu password by default
    pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor

    #-----------------------------------------------------------------
    # PG_INSTALL
    #-----------------------------------------------------------------
    pg_dbsu: postgres                 # os dbsu name, postgres by default, better not change it
    pg_dbsu_uid: 543                  # os dbsu uid and gid, 26 for default postgres users and groups
    pg_dbsu_sudo: limit               # dbsu sudo privilege, none,limit,all,nopass. limit by default
    pg_dbsu_home: /var/lib/pgsql      # postgresql home directory, `/var/lib/pgsql` by default
    pg_dbsu_ssh_exchange: true        # exchange postgres dbsu ssh key among same pgsql cluster
    pg_version: 18                    # postgres major version to be installed, 18 by default
    pg_bin_dir: /usr/pgsql/bin        # postgres binary dir, `/usr/pgsql/bin` by default
    pg_log_dir: /pg/log/postgres      # postgres log dir, `/pg/log/postgres` by default
    pg_packages:                      # pg packages to be installed, alias can be used
      - pgsql-main pgsql-common
    pg_extensions: []                 # pg extensions to be installed, alias can be used

    #-----------------------------------------------------------------
    # PG_BOOTSTRAP
    #-----------------------------------------------------------------
    pg_data: /pg/data                 # postgres data directory, `/pg/data` by default
    pg_fs_main: /data/postgres        # postgres main data directory, `/data/postgres` by default
    pg_fs_backup: /data/backups       # postgres backup data directory, `/data/backups` by default
    pg_storage_type: SSD              # storage type for pg main data, SSD,HDD, SSD by default
    pg_dummy_filesize: 64MiB          # size of `/pg/dummy`, hold 64MB disk space for emergency use
    pg_listen: '0.0.0.0'              # postgres/pgbouncer listen addresses, comma separated list
    pg_port: 5432                     # postgres listen port, 5432 by default
    pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
    patroni_enabled: true             # if disabled, no postgres cluster will be created during init
    patroni_mode: default             # patroni working mode: default,pause,remove
    pg_namespace: /pg                 # top level key namespace in etcd, used by patroni & vip
    patroni_port: 8008                # patroni listen port, 8008 by default
    patroni_log_dir: /pg/log/patroni  # patroni log dir, `/pg/log/patroni` by default
    patroni_ssl_enabled: false        # secure patroni RestAPI communications with SSL?
    patroni_watchdog_mode: off        # patroni watchdog mode: automatic,required,off. off by default
    patroni_username: postgres        # patroni restapi username, `postgres` by default
    patroni_password: Patroni.API     # patroni restapi password, `Patroni.API` by default
    pg_etcd_password: ''              # etcd password for this pg cluster, '' to use pg_cluster
    pg_primary_db: postgres           # primary database name, used by citus etc., postgres by default
    pg_parameters: {}                 # extra parameters in postgresql.auto.conf
    pg_files: []                      # extra files to be copied to postgres data directory (e.g. license)
    pg_conf: oltp.yml                 # config template: oltp,olap,crit,tiny. `oltp.yml` by default
    pg_max_conn: auto                 # postgres max connections, `auto` will use recommended value
    pg_shared_buffer_ratio: 0.25      # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
    pg_io_method: worker              # io method for postgres, auto,fsync,worker,io_uring, worker by default
    pg_rto: 30                        # recovery time objective in seconds,  `30s` by default
    pg_rpo: 1048576                   # recovery point objective in bytes, `1MiB` at most by default
    pg_libs: 'pg_stat_statements, auto_explain'  # preloaded libraries, `pg_stat_statements,auto_explain` by default
    pg_delay: 0                       # replication apply delay for standby cluster leader
    pg_checksum: true                 # enable data checksum for postgres cluster?
    pg_encoding: UTF8                 # database cluster encoding, `UTF8` by default
    pg_locale: C                      # database cluster locale, `C` by default
    pg_lc_collate: C                  # database cluster collate, `C` by default
    pg_lc_ctype: C                    # database character type, `C` by default
    #pgsodium_key: ""                 # pgsodium key, 64 hex digit, default to sha256(pg_cluster)
    #pgsodium_getkey_script: ""       # pgsodium getkey script path, pgsodium_getkey by default

    #-----------------------------------------------------------------
    # PG_PROVISION
    #-----------------------------------------------------------------
    pg_provision: true                # provision postgres cluster after bootstrap
    pg_init: pg-init                  # provision init script for cluster template, `pg-init` by default
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_privileges:            # default privileges when created by admin user
      - GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
      - GRANT SELECT     ON TABLES    TO dbrole_readonly
      - GRANT SELECT     ON SEQUENCES TO dbrole_readonly
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
      - GRANT USAGE      ON SCHEMAS   TO dbrole_offline
      - GRANT SELECT     ON TABLES    TO dbrole_offline
      - GRANT SELECT     ON SEQUENCES TO dbrole_offline
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
      - GRANT INSERT     ON TABLES    TO dbrole_readwrite
      - GRANT UPDATE     ON TABLES    TO dbrole_readwrite
      - GRANT DELETE     ON TABLES    TO dbrole_readwrite
      - GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
      - GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
      - GRANT TRUNCATE   ON TABLES    TO dbrole_admin
      - GRANT REFERENCES ON TABLES    TO dbrole_admin
      - GRANT TRIGGER    ON TABLES    TO dbrole_admin
      - GRANT CREATE     ON SCHEMAS   TO dbrole_admin
    pg_default_schemas: [ monitor ]   # default schemas to be created
    pg_default_extensions:            # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor }
      - { name: pgstattuple        ,schema: monitor }
      - { name: pg_buffercache     ,schema: monitor }
      - { name: pageinspect        ,schema: monitor }
      - { name: pg_prewarm         ,schema: monitor }
      - { name: pg_visibility      ,schema: monitor }
      - { name: pg_freespacemap    ,schema: monitor }
      - { name: postgres_fdw       ,schema: public  }
      - { name: file_fdw           ,schema: public  }
      - { name: btree_gist         ,schema: public  }
      - { name: btree_gin          ,schema: public  }
      - { name: pg_trgm            ,schema: public  }
      - { name: intagg             ,schema: public  }
      - { name: intarray           ,schema: public  }
      - { name: pg_repack }
    pg_reload: true                   # reload postgres after hba changes
    pg_default_hba_rules:             # postgres default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}
    pgb_default_hba_rules:            # pgbouncer default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

    #-----------------------------------------------------------------
    # PG_BACKUP
    #-----------------------------------------------------------------
    pgbackrest_enabled: true          # enable pgbackrest on pgsql host?
    pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
    pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_init_backup: true      # take a full backup after pgbackrest is initialized?
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backups for the last 14 days
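    # backup status can be inspected with pgbackrest itself; a sketch, assuming the
    # pg-meta cluster above (the stanza name defaults to the cluster name):
    #   sudo -iu postgres pgbackrest --stanza=pg-meta info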

    #-----------------------------------------------------------------
    # PG_ACCESS
    #-----------------------------------------------------------------
    pgbouncer_enabled: true           # if disabled, pgbouncer will not be launched on pgsql host
    pgbouncer_port: 6432              # pgbouncer listen port, 6432 by default
    pgbouncer_log_dir: /pg/log/pgbouncer  # pgbouncer log dir, `/pg/log/pgbouncer` by default
    pgbouncer_auth_query: false       # query postgres to retrieve unlisted business users?
    pgbouncer_poolmode: transaction   # pooling mode: transaction,session,statement, transaction by default
    pgbouncer_sslmode: disable        # pgbouncer client ssl mode, disable by default
    pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
    pg_weight: 100          #INSTANCE # relative load balance weight in service, 100 by default, 0-255
    pg_service_provider: ''           # dedicate haproxy node group name, or empty string for local nodes by default
    pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
    pg_default_services:              # postgres default service definitions
      - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
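    # each default service above is exposed by haproxy on every cluster member: 5433 routes
    # read-write traffic to the primary, 5434 serves read-only traffic; e.g. a sketch check
    # against pg-meta with the dbuser_view user defined earlier:
    #   psql postgres://dbuser_view:[email protected]:5434/meta -c 'SELECT pg_is_in_recovery()'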
    pg_vip_enabled: false             # enable a l2 vip for pgsql primary? false by default
    pg_vip_address: 127.0.0.1/24      # vip address in `<ipv4>/<mask>` format, require if vip is enabled
    pg_vip_interface: eth0            # vip network interface to listen, eth0 by default
    pg_dns_suffix: ''                 # pgsql dns suffix, '' by default
    pg_dns_target: auto               # auto, primary, vip, none, or ad hoc ip

    #-----------------------------------------------------------------
    # PG_MONITOR
    #-----------------------------------------------------------------
    pg_exporter_enabled: true              # enable pg_exporter on pgsql hosts?
    pg_exporter_config: pg_exporter.yml    # pg_exporter configuration file name
    pg_exporter_cache_ttls: '1,10,60,300'  # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
    pg_exporter_port: 9630                 # pg_exporter listen port, 9630 by default
    pg_exporter_params: 'sslmode=disable'  # extra url parameters for pg_exporter dsn
    pg_exporter_url: ''                    # overwrite auto-generated pg dsn if specified
    pg_exporter_auto_discovery: true       # enable auto database discovery? enabled by default
    pg_exporter_exclude_database: 'template0,template1,postgres' # csv of databases that WILL NOT be monitored during auto-discovery
    pg_exporter_include_database: ''       # csv of databases that WILL BE monitored during auto-discovery
    pg_exporter_connect_timeout: 200       # pg_exporter connect timeout in ms, 200 by default
    pg_exporter_options: ''                # overwrite extra options for pg_exporter
    pgbouncer_exporter_enabled: true       # enable pgbouncer_exporter on pgsql hosts?
    pgbouncer_exporter_port: 9631          # pgbouncer_exporter listen port, 9631 by default
    pgbouncer_exporter_url: ''             # overwrite auto-generated pgbouncer dsn if specified
    pgbouncer_exporter_options: ''         # overwrite extra options for pgbouncer_exporter
    pgbackrest_exporter_enabled: true      # enable pgbackrest_exporter on pgsql hosts?
    pgbackrest_exporter_port: 9854         # pgbackrest_exporter listen port, 9854 by default
    pgbackrest_exporter_options: >
      --collect.interval=120
      --log.level=info

    #-----------------------------------------------------------------
    # PG_REMOVE
    #-----------------------------------------------------------------
    pg_safeguard: false               # abort the remove playbook if pg_safeguard is enabled, false by default
    pg_rm_data: true                  # remove postgres data during remove? true by default
    pg_rm_backup: true                # remove pgbackrest backup during primary remove? true by default
    pg_rm_pkg: true                   # uninstall postgres packages during remove? true by default

...
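
The pg_default_services defined above expose each PostgreSQL cluster through haproxy on fixed ports: 5433 (primary, via pgbouncer), 5434 (replica, via pgbouncer), 5436 (default, direct to postgres), and 5438 (offline, direct to postgres). A quick sanity check might look like the following sketch, assuming the single-node pg-meta cluster and the dbuser_meta / DBUser.Meta credentials used elsewhere in this document:

psql postgres://dbuser_meta:[email protected]:5433/meta -c 'SELECT 1'  # read-write via primary service
psql postgres://dbuser_meta:[email protected]:5434/meta -c 'SELECT 1'  # read-only via replica service
psql postgres://dbuser_meta:[email protected]:5436/meta -c 'SELECT 1'  # direct postgres via default service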

Explanation

The demo/debian template is optimized for Debian and Ubuntu distributions.

Supported Distributions:

  • Debian 12 (Bookworm)
  • Debian 13 (Trixie)
  • Ubuntu 22.04 LTS (Jammy)
  • Ubuntu 24.04 LTS (Noble)

Key Features:

  • Uses PGDG APT repositories
  • Optimized for APT package manager
  • Supports Debian/Ubuntu-specific package names

Use Cases:

  • Cloud servers (Ubuntu is widely used on public clouds)
  • Container environments (Debian is a common base image)
  • Development and testing environments
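
For example, bringing this template up on a fresh Debian/Ubuntu host follows the same flow as the other templates in this document (a sketch; supply your own primary IP):

./configure -c demo/debian [-i <primary_ip>]   # generate pigsty.yml from this template
./deploy.yml                                   # run the one-pass deployment playbook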

33 - demo/demo

Pigsty public demo site configuration, showcasing SSL certificates, domain exposure, and full extension installation

The demo/demo configuration template is used by Pigsty’s public demo site, demonstrating how to expose services publicly, configure SSL certificates, and install all available extensions.

If you want to set up your own public service on a cloud server, you can use this template as a reference.


Overview

  • Config Name: demo/demo
  • Node Count: Single node
  • Description: Pigsty public demo site configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta, rich

Usage:

./configure -c demo/demo [-i <primary_ip>]

Key Features

This template enhances the meta template with:

  • SSL certificate and custom domain configuration (e.g., pigsty.cc)
  • Downloads and installs all available PostgreSQL 18 extensions
  • Enables Docker with image acceleration
  • Deploys MinIO object storage
  • Pre-configures multiple business databases and users
  • Adds Redis primary-replica instance examples
  • Adds FerretDB MongoDB-compatible cluster
  • Adds Kafka sample cluster

Content

Source: pigsty/conf/demo/demo.yml

---
#==============================================================#
# File      :   demo.yml
# Desc      :   Pigsty Public Demo Configuration
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:
        nodename: pigsty.cc       # overwrite the default hostname
        node_id_from_pg: false    # do not use the pg identity as hostname
        docker_enabled: true      # enable docker on this node
        docker_registry_mirrors: ["https://mirror.ccs.tencentyun.com", "https://docker.1ms.run"]
        # ./pgsql-monitor.yml -l infra     # monitor 'external' PostgreSQL instance
        pg_exporters:             # treat local postgres as RDS for demonstration purposes
          20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 }
          #20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.11 , pg_port: 5432 }
          #20003: { pg_cluster: pg-bar, pg_seq: 2, pg_host: 10.10.10.12 , pg_exporter_url: 'postgres://dbuser_monitor:[email protected]:5432/postgres?sslmode=disable' }
          #20004: { pg_cluster: pg-bar, pg_seq: 3, pg_host: 10.10.10.13 , pg_monitor_username: dbuser_monitor, pg_monitor_password: DBUser.Monitor }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta       ,password: DBUser.Meta       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view       ,password: DBUser.Viewer     ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana    ,password: DBUser.Grafana    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase   ,password: DBUser.Bytebase   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong       ,password: DBUser.Kong       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea      ,password: DBUser.Gitea      ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki       ,password: DBUser.Wiki       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco       ,password: DBUser.Noco       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
          - {name: dbuser_odoo       ,password: DBUser.Odoo       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for odoo service ,createdb: true } #,superuser: true}
          - {name: dbuser_mattermost ,password: DBUser.MatterMost ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for mattermost ,createdb: true }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector},{name: postgis},{name: timescaledb}]}
          - {name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database  }
          - {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - {name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong api gateway database }
          - {name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - {name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database  }
          - {name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database     }
          #- {name: odoo     ,owner: dbuser_odoo     ,revokeconn: true ,comment: odoo main database  }
          - {name: mattermost ,owner: dbuser_mattermost ,revokeconn: true ,comment: mattermost main database }
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        pg_libs: 'timescaledb,pg_stat_statements, auto_explain'  # add timescaledb to shared_preload_libraries
        pg_extensions: # extensions to be installed on this cluster
          - timescaledb timescaledb_toolkit pg_timeseries periods temporal_tables emaj table_version pg_cron pg_task pg_later pg_background
          - postgis pgrouting pointcloud pg_h3 q3c ogr_fdw geoip pg_polyline pg_geohash #mobilitydb
          - pgvector vchord pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pgroonga pg_bigm zhparser pg_bestmatch vchord_bm25 hunspell
          - citus hydra pg_analytics pg_duckdb pg_mooncake duckdb_fdw pg_parquet pg_fkpart pg_partman plproxy #pg_strom
          - age hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pg_incremental pgmq pgq pg_cardano omnigres #rdkit
          - pg_tle plv8 pllua plprql pldebugger plpgsql_check plprofiler plsh pljava #plr #pgtap #faker #dbt2
          - pg_prefix pg_semver pgunit pgpdf pglite_fusion md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_xenophile pg_currency pg_collection pgmp numeral pg_rational pguint pg_uint128 hashtypes ip4r pg_uri pgemailaddr pg_acl timestamp9 chkpass #pg_duration #debversion #pg_rrule
          - pg_gzip pg_bzip pg_zstd pg_http pg_net pg_curl pgjq pgjwt pg_smtp_client pg_html5_email_address url_encode pgsql_tweaks pg_extra_time pgpcre icu_ext pgqr pg_protobuf envvar floatfile pg_readme ddl_historization data_historization pg_schedoc pg_hashlib pg_xxhash shacrypt cryptint pg_ecdsa pgsparql
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids topn quantile lower_quantile count_distinct omnisketch ddsketch vasco pgxicor tdigest first_last_agg extra_window_functions floatvec aggs_for_vecs aggs_for_arrays pg_arraymath pg_math pg_random pg_base36 pg_base62 pg_base58 pg_financial
          - pg_repack pg_squeeze pg_dirtyread pgfincore pg_cooldown pg_ddlx pg_prioritize pg_checksums pg_readonly pg_upless pg_permissions pgautofailover pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_fio pg_savior safeupdate pg_drop_events table_log #pgagent #pgpool
          - pg_profile pg_tracing pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_store_plans pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt pg_anon pg_tde pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils pg_noset
          - wrappers multicorn odbc_fdw jdbc_fdw mysql_fdw tds_fdw sqlite_fdw pgbouncer_fdw mongo_fdw redis_fdw pg_redis_pubsub kafka_fdw hdfs_fdw firebird_fdw aws_s3 log_fdw #oracle_fdw #db2_fdw
          - documentdb orafce pgtt session_variable pg_statement_rollback pg_dbms_metadata pg_dbms_lock pgmemcache #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots db_migrator wal2json wal2mongo decoderbufs decoder_raw mimeo pg_fact_loader pg_bulkload #repmgr

    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' }, 6381: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    # ./mongo.yml -l pg-mongo
    pg-mongo:
      hosts: { 10.10.10.10: { mongo_seq: 1 } }
      vars:
        mongo_cluster: pg-mongo
        mongo_pgurl: 'postgres://dbuser_meta:[email protected]:5432/grafana'

    # ./kafka.yml -l kf-main
    kf-main:
      hosts: { 10.10.10.10: { kafka_seq: 1, kafka_role: controller } }
      vars:
        kafka_cluster: kf-main
        kafka_peer_port: 9093


  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe

    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      cc           : { domain: pigsty.cc      ,path:     "/www/pigsty.cc"   ,cert: /etc/cert/pigsty.cc.crt ,key: /etc/cert/pigsty.cc.key }
      minio        : { domain: m.pigsty.cc    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty.cc  ,endpoint: "127.0.0.1:8884"   }
      pgadmin      : { domain: adm.pigsty.cc  ,endpoint: "127.0.0.1:8885"   }
      pgweb        : { domain: cli.pigsty.cc  ,endpoint: "127.0.0.1:8886"   }
      bytebase     : { domain: ddl.pigsty.cc  ,endpoint: "127.0.0.1:8887"   }
      jupyter      : { domain: lab.pigsty.cc  ,endpoint: "127.0.0.1:8888", websocket: true }
      gitea        : { domain: git.pigsty.cc  ,endpoint: "127.0.0.1:8889" }
      wiki         : { domain: wiki.pigsty.cc ,endpoint: "127.0.0.1:9002" }
      noco         : { domain: noco.pigsty.cc ,endpoint: "127.0.0.1:9003" }
      supa         : { domain: supa.pigsty.cc ,endpoint: "10.10.10.10:8000" ,websocket: true }
      dify         : { domain: dify.pigsty.cc ,endpoint: "10.10.10.10:8001" ,websocket: true }
      odoo         : { domain: odoo.pigsty.cc ,endpoint: "127.0.0.1:8069"   ,websocket: true }
      mm           : { domain: mm.pigsty.cc   ,endpoint: "10.10.10.10:8065" ,websocket: true }
    # scp -r ~/pgsty/cc/cert/*       pj:/etc/cert/       # copy https certs
    # scp -r ~/dev/pigsty.cc/public  pj:/www/pigsty.cc   # copy pigsty.cc website


    node_etc_hosts: [ "${admin_ip} sss.pigsty" ]
    node_timezone: Asia/Hong_Kong
    node_ntp_servers:
      - pool cn.pool.ntp.org iburst
      - pool ${admin_ip} iburst       # assume non-admin nodes do not have internet access
    pgbackrest_enabled: false         # do not take backups since this is disposable demo env
    #prometheus_options: '--storage.tsdb.retention.time=15d' # prometheus extra server options
    prometheus_options: '--storage.tsdb.retention.size=3GB' # keep 3GB data at most on demo env

    # install all postgresql18 extensions
    pg_version: 18                    # default postgres version
    repo_extra_packages: [ pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_extensions: [pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl ] #,pg18-olap]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The demo/demo template is Pigsty’s public demo configuration, showcasing a complete production-grade deployment example.

Key Features:

  • HTTPS certificate and custom domain configuration
  • All available PostgreSQL extensions installed
  • Integration with Redis, FerretDB, Kafka, and other components
  • Docker image acceleration configured

Use Cases:

  • Setting up public demo sites
  • Scenarios requiring complete feature demonstration
  • Learning Pigsty advanced configuration

Notes:

  • SSL certificate files must be prepared
  • DNS resolution must be configured
  • Some extensions are not available on ARM64 architecture
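
To adapt this template to your own site, replace the pigsty.cc entries in infra_portal with your domain and certificate paths. A minimal sketch, where example.com and the cert/key paths are placeholders for your own assets:

infra_portal:
  home : { domain: i.pigsty }     # default domain name
  web  : { domain: example.com ,path: "/www/example.com" ,cert: /etc/cert/example.com.crt ,key: /etc/cert/example.com.key }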

34 - demo/minio

Four-node x four-drive high-availability multi-node multi-disk MinIO cluster demo

The demo/minio configuration template demonstrates how to deploy a high-availability MinIO cluster with four nodes and four drives each (16 drives total), providing S3-compatible object storage services.

For more tutorials, see the MINIO module documentation.


Overview

  • Config Name: demo/minio
  • Node Count: Four nodes
  • Description: High-availability multi-node multi-disk MinIO cluster demo
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c demo/minio

Note: This is a four-node template. You need to modify the IP addresses of the other three nodes after generating the configuration.
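For instance, after configure you could patch the generated pigsty.yml in place (a sketch; the right-hand addresses are placeholders for your actual nodes):

sed -i 's/10.10.10.11/10.10.10.21/g; s/10.10.10.12/10.10.10.22/g; s/10.10.10.13/10.10.10.23/g' pigsty.yml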


Content

Source: pigsty/conf/demo/minio.yml

---
#==============================================================#
# File      :   minio.yml
# Desc      :   pigsty: 4 node x 4 disk MNMD minio clusters
# Ctime     :   2023-01-07
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# One pass installation with:
# ./deploy.yml
#==============================================================#
# 1.  minio-1 @ 10.10.10.10:9000 -  - (9002) svc <-x  10.10.10.9:9002
# 2.  minio-2 @ 10.10.10.11:9000 -xx- (9002) svc <-x <----------------
# 3.  minio-3 @ 10.10.10.12:9000 -xx- (9002) svc <-x  sss.pigsty:9002
# 4.  minio-4 @ 10.10.10.13:9000 -  - (9002) svc <-x  (intranet dns)
#==============================================================#
# use minio load balancer service (9002) instead of direct access (9000)
# mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO
#==============================================================#
# https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.html
# MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4}/minio"


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # minio cluster with 4 nodes and 4 drives per node
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
        10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
        10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
        10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
      vars:
        minio_cluster: minio
        minio_data: '/data{1...4}'
        minio_buckets:                    # list of minio bucket to be created
          - { name: pgsql }
          - { name: meta ,versioning: true }
          - { name: data }
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

        # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
        node_cluster: minio
        vip_enabled: true
        vip_vrid: 128
        vip_address: 10.10.10.9
        vip_interface: eth1

        # expose minio service with haproxy on all nodes
        haproxy_services:
          - name: minio                    # [REQUIRED] service name, unique
            port: 9002                     # [REQUIRED] service port, unique
            balance: leastconn             # [OPTIONAL] load balancer algorithm
            options:                       # [OPTIONAL] minio health check
              - option httpchk
              - option http-keep-alive
              - http-check send meth OPTIONS uri /minio/health/live
              - http-check expect status 200
            servers:
              - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

      # domain names to access minio web console via nginx web portal (optional)
      minio        : { domain: m.pigsty     ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio10      : { domain: m10.pigsty   ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio11      : { domain: m11.pigsty   ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
      minio12      : { domain: m12.pigsty   ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
      minio13      : { domain: m13.pigsty   ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }

    minio_endpoint: https://sss.pigsty:9002   # explicitly overwrite minio endpoint with haproxy port
    node_etc_hosts: ["10.10.10.9 sss.pigsty"] # domain name to access minio from all nodes (required)

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
...

Explanation

The demo/minio template is a production-grade reference configuration for MinIO, showcasing Multi-Node Multi-Drive (MNMD) architecture.

Key Features:

  • Multi-Node Multi-Drive Architecture: 4 nodes × 4 drives = 16-drive erasure coding group
  • L2 VIP High Availability: Virtual IP binding via Keepalived
  • HAProxy Load Balancing: Unified access endpoint on port 9002
  • Fine-grained Permissions: Separate users and buckets for different applications

Access:

# Configure MinIO alias with mcli (via HAProxy load balancing)
mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO

# List buckets
mcli ls sss/

# Use console
# Visit https://m.pigsty or https://m10-m13.pigsty

Use Cases:

  • Environments requiring S3-compatible object storage
  • PostgreSQL backup storage (pgBackRest remote repository; see the sketch after the notes)
  • Data lake for big data and AI workloads
  • Production environments requiring high-availability object storage

Notes:

  • Each node requires 4 independent disks mounted at /data1 - /data4
  • At least 4 nodes are recommended in production for erasure-coding redundancy
  • VIP requires proper network interface configuration (vip_interface)
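
For the pgBackRest use case, this cluster pairs with the minio backup repository parameters shown earlier in this document. A hedged sketch, reusing the pgbackrest user and pgsql bucket defined above and routing backup traffic through the haproxy service port:

pgbackrest_method: minio          # switch the pgbackrest repo from local to minio
pgbackrest_repo:
  minio:
    type: s3                      # s3-compatible object storage
    s3_endpoint: sss.pigsty       # the intranet domain bound to the L2 VIP above
    s3_bucket: pgsql              # bucket created for pgbackrest in this template
    s3_key: pgbackrest            # minio user access key
    s3_key_secret: S3User.Backup  # minio user secret key
    s3_uri_style: path            # use path-style uri for minio
    path: /pgbackrest             # backup path inside the bucket
    storage_port: 9002            # go through the haproxy service instead of 9000
    storage_ca_file: /etc/pki/ca.crt  # self-signed ca used by this deployment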

35 - build/oss

Pigsty open-source edition offline package build environment configuration

The build/oss configuration template is the build environment configuration for Pigsty open-source edition offline packages, used to batch-build offline installation packages across multiple operating systems.

This configuration is intended for developers and contributors only.


Overview

  • Config Name: build/oss
  • Node Count: Six nodes (el9, el10, d12, d13, u22, u24)
  • Description: Pigsty open-source edition offline package build environment
  • OS Distro: el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64

Usage:

cp conf/build/oss.yml pigsty.yml

Note: This is a build template with fixed IP addresses, intended for internal use only.


Content

Source: pigsty/conf/build/oss.yml

---
#==============================================================#
# File      :   oss.yml
# Desc      :   Pigsty 6-node building env (PG18)
# Ctime     :   2024-10-22
# Mtime     :   2025-12-12
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.24
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn,*.pigsty.cc"

    # building spec
    pg_version: 18
    cache_pkg_dir: 'dist/${version}'
    repo_modules: infra,node,pgsql
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_extensions:                 [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap, pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

  children:
    #el8:  { hosts: { 10.10.10.8:  { pg_cluster: el8 ,pg_seq: 1 ,pg_role: primary }}}
    el9:  { hosts: { 10.10.10.9:  { pg_cluster: el9  ,pg_seq: 1 ,pg_role: primary }}}
    el10: { hosts: { 10.10.10.10: { pg_cluster: el10 ,pg_seq: 1 ,pg_role: primary }}}
    d12:  { hosts: { 10.10.10.12: { pg_cluster: d12  ,pg_seq: 1 ,pg_role: primary }}}
    d13:  { hosts: { 10.10.10.13: { pg_cluster: d13  ,pg_seq: 1 ,pg_role: primary }}}
    u22:  { hosts: { 10.10.10.22: { pg_cluster: u22  ,pg_seq: 1 ,pg_role: primary }}}
    u24:  { hosts: { 10.10.10.24: { pg_cluster: u24  ,pg_seq: 1 ,pg_role: primary }}}
    etcd: { hosts: { 10.10.10.24:  { etcd_seq: 1 }}, vars: { etcd_cluster: etcd    }}
    infra:
      hosts:
        #10.10.10.8:  { infra_seq: 1, admin_ip: 10.10.10.8  ,ansible_host: el8  } #, ansible_python_interpreter: /usr/bin/python3.12 }
        10.10.10.9:  { infra_seq: 2, admin_ip: 10.10.10.9  ,ansible_host: el9  }
        10.10.10.10: { infra_seq: 3, admin_ip: 10.10.10.10 ,ansible_host: el10 }
        10.10.10.12: { infra_seq: 4, admin_ip: 10.10.10.12 ,ansible_host: d12  }
        10.10.10.13: { infra_seq: 5, admin_ip: 10.10.10.13 ,ansible_host: d13  }
        10.10.10.22: { infra_seq: 6, admin_ip: 10.10.10.22 ,ansible_host: u22  }
        10.10.10.24: { infra_seq: 7, admin_ip: 10.10.10.24 ,ansible_host: u24  }
      vars: { node_conf: oltp }

...

Explanation

The build/oss template is the build configuration for Pigsty open-source edition offline packages.

Build Contents:

  • PostgreSQL 18 and all categorized extension packages
  • Infrastructure packages (Prometheus, Grafana, Nginx, etc.)
  • Node packages (monitoring agents, tools, etc.)
  • Extra modules

Supported Operating Systems:

  • EL9 (Rocky/Alma/RHEL 9)
  • EL10 (Rocky 10 / RHEL 10)
  • Debian 12 (Bookworm)
  • Debian 13 (Trixie)
  • Ubuntu 22.04 (Jammy)
  • Ubuntu 24.04 (Noble)

Build Process:

# 1. Prepare build environment
cp conf/build/oss.yml pigsty.yml

# 2. Download packages on each node
./infra.yml -t repo_build

# 3. Package offline installation files
make cache

Use Cases:

  • Pigsty developers building new versions
  • Contributors testing new extensions
  • Enterprise users customizing offline packages
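
The packaged artifacts land under the cache_pkg_dir defined above, so with version v4.0.0 the results can be inspected with (a sketch, assuming the default output layout):

ls dist/v4.0.0/    # offline packages built for each distro (el9, el10, d12, d13, u22, u24)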

36 - build/pro

Pigsty professional edition offline package build environment configuration (multi-version)

The build/pro configuration template is the build environment configuration for Pigsty professional edition offline packages, covering all PostgreSQL versions 13-18 plus additional commercial components.

This configuration is intended for developers and contributors only.


Overview

  • Config Name: build/pro
  • Node Count: Six nodes (el9, el10, d12, d13, u22, u24)
  • Description: Pigsty professional edition offline package build environment (multi-version)
  • OS Distro: el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64

Usage:

cp conf/build/pro.yml pigsty.yml

Note: This is a build template with fixed IP addresses, intended for internal use only.


Content

Source: pigsty/conf/build/pro.yml

---
#==============================================================#
# File      :   pro.yml
# Desc      :   Pigsty 6-node pro building env (PG 13-18)
# Ctime     :   2024-10-22
# Mtime     :   2025-12-15
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.24
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn,*.pigsty.cc"

    # building spec
    pg_version: 18
    cache_pkg_dir: 'dist/${version}/pro'
    repo_modules: infra,node,pgsql
    pg_extensions: []
    repo_packages: [
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
      pg18-full,pg18-time,pg18-gis,pg18-rag,pg18-fts,pg18-olap,pg18-feat,pg18-lang,pg18-type,pg18-util,pg18-func,pg18-admin,pg18-stat,pg18-sec,pg18-fdw,pg18-sim,pg18-etl,
      pg17-full,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-util,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      pg16-full,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-util,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
      pg15-full,pg15-time,pg15-gis,pg15-rag,pg15-fts,pg15-olap,pg15-feat,pg15-lang,pg15-type,pg15-util,pg15-func,pg15-admin,pg15-stat,pg15-sec,pg15-fdw,pg15-sim,pg15-etl,
      pg14-full,pg14-time,pg14-gis,pg14-rag,pg14-fts,pg14-olap,pg14-feat,pg14-lang,pg14-type,pg14-util,pg14-func,pg14-admin,pg14-stat,pg14-sec,pg14-fdw,pg14-sim,pg14-etl,
      pg13-full,pg13-time,pg13-gis,pg13-rag,pg13-fts,pg13-olap,pg13-feat,pg13-lang,pg13-type,pg13-util,pg13-func,pg13-admin,pg13-stat,pg13-sec,pg13-fdw,pg13-sim,pg13-etl,
      infra-extra, kafka, java-runtime, sealos, tigerbeetle, polardb, ivorysql
    ]

  children:
    #el8:  { hosts: { 10.10.10.8:  { pg_cluster: el8 ,pg_seq: 1  ,pg_role: primary }}}
    el9:  { hosts: { 10.10.10.9:  { pg_cluster: el9  ,pg_seq: 1 ,pg_role: primary }}}
    el10: { hosts: { 10.10.10.10: { pg_cluster: el10 ,pg_seq: 1 ,pg_role: primary }}}
    d12:  { hosts: { 10.10.10.12: { pg_cluster: d12  ,pg_seq: 1 ,pg_role: primary }}}
    d13:  { hosts: { 10.10.10.13: { pg_cluster: d13  ,pg_seq: 1 ,pg_role: primary }}}
    u22:  { hosts: { 10.10.10.22: { pg_cluster: u22  ,pg_seq: 1 ,pg_role: primary }}}
    u24:  { hosts: { 10.10.10.24: { pg_cluster: u24  ,pg_seq: 1 ,pg_role: primary }}}
    etcd: { hosts: { 10.10.10.24:  { etcd_seq: 1 }}, vars: { etcd_cluster: etcd    }}
    infra:
      hosts:
        #10.10.10.8:  { infra_seq: 9, admin_ip: 10.10.10.8  ,ansible_host: el8  } #, ansible_python_interpreter: /usr/bin/python3.12 }
        10.10.10.9:  { infra_seq: 1, admin_ip: 10.10.10.9  ,ansible_host: el9  }
        10.10.10.10: { infra_seq: 2, admin_ip: 10.10.10.10 ,ansible_host: el10 }
        10.10.10.12: { infra_seq: 3, admin_ip: 10.10.10.12 ,ansible_host: d12  }
        10.10.10.13: { infra_seq: 4, admin_ip: 10.10.10.13 ,ansible_host: d13  }
        10.10.10.22: { infra_seq: 5, admin_ip: 10.10.10.22 ,ansible_host: u22  }
        10.10.10.24: { infra_seq: 6, admin_ip: 10.10.10.24 ,ansible_host: u24  }
      vars: { node_conf: oltp }

...

Explanation

The build/pro template is the build configuration for Pigsty professional edition offline packages, containing more content than the open-source edition.

Differences from OSS Edition:

  • Includes all six major PostgreSQL versions 13-18
  • Includes additional commercial/enterprise components: Kafka, PolarDB, IvorySQL, etc.
  • Includes Java runtime and Sealos tools
  • Output directory is dist/${version}/pro/

Build Contents:

  • All PostgreSQL major versions: 13, 14, 15, 16, 17, 18
  • All categorized extension packages for each version
  • Kafka message queue
  • PolarDB and IvorySQL kernels
  • TigerBeetle distributed database
  • Sealos container platform

Use Cases:

  • Enterprise customers requiring multi-version support
  • Need for Oracle/MySQL compatible kernels
  • Need for Kafka message queue integration
  • Long-term support (LTS) version requirements

Build Process:

# 1. Prepare build environment
cp conf/build/pro.yml pigsty.yml

# 2. Download packages on each node
./infra.yml -t repo_build

# 3. Package offline installation files
make cache-pro