Configuration

Batteries-included config templates for specific scenarios, and detailed explanations.

1 - Overview

Batteries-included config templates for specific scenarios, and detailed explanations.

Single Node

meta: Default single-node installation template with extensive configuration parameter descriptions.

rich: Downloads all available PostgreSQL extensions and Docker, provisioning a series of databases for business software.

pitr: A single-node configuration sample for continuous backups and Point-in-Time Recovery (PITR) using remote object storage.

demo: Configuration file used by the Pigsty demo site, configured to serve publicly with domain names and certificates.

supa: Uses Docker Compose to start Supabase on a local single-node PostgreSQL instance.

bare: The minimal single-node config template, with only the essential components required to run a PostgreSQL instance.


Four Node

full: A 4-node standard sandbox demonstration environment, featuring two PostgreSQL clusters, MinIO, Etcd, Redis, and a sample FerretDB cluster.

safe: A 4-node template with hardened security and a delayed replica cluster, following high-standard security best practices.

mssql: Replaces PostgreSQL with a Microsoft SQL Server-compatible kernel using WiltonDB or Babelfish.

polar: Substitutes native PostgreSQL with Alibaba Cloud’s PolarDB for PostgreSQL kernel.

ivory: Replaces native PostgreSQL with IvorySQL, an Oracle-compatible kernel by HighGo.

minio: 4-node HA MinIO MNMD cluster to provide S3-compatible object storage services.


Other Specs

dual: 2-node template to set up a basic high-availability PostgreSQL cluster with primary-replica replication, tolerating the failure of one node.

slim: 2-node template for a minimal installation that skips the local software repo and infra module, with etcd as the only dependency.

trio: Three-node configuration template providing a standard High Availability (HA) architecture, tolerating failure of one out of three nodes.

oss: 5-node template for building offline packages for five major OS distros in one batch.

ext: 5-node template for building extensions for five major OS distros.

prod: 36-node production-simulation template, testing different scenarios and multiple PostgreSQL major versions at the same time.




2 - 1-node: meta

Default single-node installation template with extensive configuration parameter descriptions.

The meta configuration template is Pigsty’s default template, designed to fulfill Pigsty’s core functionality—deploying PostgreSQL—on a single node.

To maximize compatibility, meta installs only the minimum required software set to ensure it runs across all operating system distributions and architectures.


Overview

  • Conf Name: meta
  • Node Count: 1-node, pigsty/vagrant/spec/meta.rb
  • Description: Default single-node installation template with extensive configuration parameter descriptions.
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: rich, pitr, demo

Usage: This is the default config template, so there’s no need to specify -c meta explicitly during configure:

./configure [-i <primary_ip>]
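
Since meta is the default, a full single-node installation typically reduces to the standard flow (a sketch assuming you run it from the pigsty home directory as the admin user):

./bootstrap                       # install ansible & prepare local repo packages
./configure [-i <primary_ip>]     # generate pigsty.yml from the default meta template
./install.yml                     # run the installation playbook on the current node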

Content

Source: pigsty/conf/meta.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    #----------------------------------#
    # etcd cluster for HA postgres DCS
    #----------------------------------#
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    #----------------------------------#
    # minio (OPTIONAL backup repo)
    #----------------------------------#
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql (singleton on current node)
    #----------------------------------#
    # this is an example single-node postgres cluster with postgis & timescaledb installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance for ETL & interactive queries
      vars:
        pg_cluster: pg-meta                 # required identity parameter, usually same as group name

        # define business databases here: https://pigsty.io/docs/pgsql/db/
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, postgres by default
            #template: template1            # optional, which template to use, template1 by default
            #encoding: UTF8                 # optional, database encoding, UTF8 by default. (MUST same as template database)
            #locale: C                      # optional, database locale, C by default.  (MUST same as template database)
            #lc_collate: C                  # optional, database collate, C by default. (MUST same as template database)
            #lc_ctype: C                    # optional, database ctype, C by default.   (MUST same as template database)
            #tablespace: pg_default         # optional, default tablespace, 'pg_default' by default.
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }  # define another database

        # define business users here: https://pigsty.io/docs/pgsql/user/
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            superuser: false                # optional, is superuser? false by default
            createdb: false                 # optional, can create database? false by default
            createrole: false               # optional, can create role? false by default
            inherit: true                   # optional, can this role use inherited privileges? true by default
            replication: false              # optional, can this role do replication? false by default
            bypassrls: false                # optional, can this role bypass row level security? false by default
            pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            connlimit: -1                   # optional, user connection limit, default -1 disable limit
            expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            comment: pigsty admin user      # optional, comment string for this user/role
            roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }

        # define pg extensions: https://pigsty.io/docs/pgext/
        pg_libs: 'pg_stat_statements, auto_explain' # preloaded shared libraries (add timescaledb here if needed)
        pg_extensions: [ pgvector ] # available extensions: https://ext.pigsty.io/#/list

        # define HBA rules here: https://pigsty.io/docs/pgsql/hba/#define-hba
        pg_hba_rules:                       # example hba rules
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        #pg_vip_enabled: true               # define a L2 VIP which bind to cluster primary instance
        #pg_vip_address: 10.10.10.2/24      # L2 VIP Address and netmask
        #pg_vip_interface: eth1             # L2 VIP Network interface, overwrite on host vars if member have different network interface names
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every day at 1am
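        # alternative schedule (as in the pitr template): full backup on monday 1am, incremental on other days
        #node_crontab:
        #  - '00 01 * * 1 postgres /pg/bin/pg-backup full'
        #  - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'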


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      #minio        : { domain: sss.pigsty  ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    #pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    #minio_users:                      # and configure `pgbackrest_repo` & `minio_users` accordingly
    #  - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
    #  - { access_key: pgbackrest , secret_key: S3User.Backup, policy: readwrite }
    #pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
    #  minio: ...                      # optional minio repo for pgbackrest ...
    #    s3_key: pgbackrest            # minio user access key for pgbackrest
    #    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #node_etc_hosts: [ '10.10.10.10 h.pigsty a.pigsty p.pigsty g.pigsty sss.pigsty' ]

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty

    #----------------------------------#
    # Safe Guard
    #----------------------------------#
    # you can enable these flags after bootstrap, to prevent purging running etcd / pgsql instances
    etcd_safeguard: false             # prevent purging running etcd instance?
    pg_safeguard: false               # prevent purging running postgres instance? false by default

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]

Caveats

To maintain compatibility across operating systems and architectures, the meta template installs only the minimum required software. This is reflected in the choices for repo_packages and repo_extra_packages:

  • Docker is not downloaded by default.
  • Only essential PostgreSQL extensions—pg_repack, wal2json, and pgvector—are downloaded by default.
  • Tools classified under pgsql-utility but not part of pgsql-common—such as pg_activity, pg_timetable, pgFormatter, pg_filedump, pgxnclient, timescaledb-tools, pgcopydb, and pgloader—are not downloaded.
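
If you do need Docker and the full extension set on a capable x86_64 node, a sketch of the relevant overrides in pigsty.yml, following the commented hints in the template above and the rich template below:

repo_modules: infra,node,pgsql,docker
repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common, docker ]
repo_extra_packages: [ pgsql-main, pgsql-core, pgsql-time, pgsql-gis, pgsql-rag, pgsql-fts, pgsql-olap, pgsql-feat, pgsql-lang, pgsql-type, pgsql-func, pgsql-admin, pgsql-stat, pgsql-sec, pgsql-fdw, pgsql-sim, pgsql-etl ]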

3 - 1-node: rich

Downloads all available PostgreSQL extensions and Docker, provisioning a series of databases for business software.

The rich template is a single-node configuration aimed at running various business software backed by PostgreSQL.

If you want to run business software that uses PostgreSQL as its underlying database on a single machine through Docker, such as Odoo, Gitea, or Wiki.js, consider using this template.


Overview

  • Conf Name: rich
  • Node Count: 1-node, pigsty/vagrant/spec/meta.rb
  • Description: Downloads all available PostgreSQL extensions and Docker, provisioning a series of databases for business software, in addition to meta.
  • Content: pigsty/conf/rich.yml
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64
  • Related: meta

The rich configuration template is designed for single-node deployments. Built upon meta, it downloads all available PostgreSQL extensions and Docker and preconfigures a set of databases to provide an out-of-the-box environment for software integrations.

Compared to meta, this configuration:

  • Downloads Docker packages (docker-ce, docker-compose-plugin).
  • Downloads all available PostgreSQL 17 extensions locally.
  • Sets up seven additional dedicated databases and corresponding users within the default pg-meta PostgreSQL cluster for application-specific use.

To enable: Use the -c rich parameter during the configure process:

./configure -c rich [-i <primary_ip>]
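
Note that downloading Docker packages does not enable Docker by itself; this template defines a docker group, and the dedicated playbook enables it (as referenced in the config below):

./docker.yml -l docker     # install & enable docker on hosts in the 'docker' group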

Content

Source: pigsty/conf/rich.yml

all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
          - {name: dbuser_odoo     ,password: DBUser.Odoo     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for odoo service ,createdb: true} #,superuser: true}
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector},{name: postgis},{name: timescaledb}]}
          - {name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database  }
          - {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - {name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong api gateway database }
          - {name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - {name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database  }
          - {name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database     }
          - {name: odoo     ,owner: dbuser_odoo     ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        pg_libs: 'timescaledb,pg_stat_statements, auto_explain'  # add timescaledb to shared_preload_libraries
        pg_extensions:
          - postgis, pgvector, timescaledb
        node_crontab:  # make one full backup at 1am every day
          - '00 01 * * * postgres /pg/bin/pg-backup full'

    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    # To install & enable docker: ./docker.yml -l docker
    docker:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:
        docker_enabled: true
        #docker_registry_mirrors: ['https://docker.xxxxx.io']   # add your docker mirror/proxy if needed

  vars:                               # global variables
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      minio        : { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty  ,endpoint: "127.0.0.1:8884" }
      pgadmin      : { domain: adm.pigsty  ,endpoint: "127.0.0.1:8885" }
      pgweb        : { domain: cli.pigsty  ,endpoint: "127.0.0.1:8886" }
      bytebase     : { domain: ddl.pigsty  ,endpoint: "127.0.0.1:8887" }
      jupyter      : { domain: lab.pigsty  ,endpoint: "127.0.0.1:8888", websocket: true }
      gitea        : { domain: git.pigsty  ,endpoint: "127.0.0.1:8889" }
      wiki         : { domain: wiki.pigsty ,endpoint: "127.0.0.1:9002" }
      noco         : { domain: noco.pigsty ,endpoint: "127.0.0.1:9003" }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }
      dify         : { domain: dify.pigsty ,endpoint: "10.10.10.10:8001", websocket: true }
      odoo         : { domain: odoo.pigsty, endpoint: "127.0.0.1:8069"  , websocket: true }
    nginx_navbar:                    # application nav links on home page
      - { name: PgAdmin4   , url : 'http://adm.pigsty'  , comment: 'PgAdmin4 for PostgreSQL'  }
      - { name: PGWeb      , url : 'http://cli.pigsty'  , comment: 'PGWEB Browser Client'     }
      - { name: ByteBase   , url : 'http://ddl.pigsty'  , comment: 'ByteBase Schema Migrator' }
      - { name: PostgREST  , url : 'http://api.pigsty'  , comment: 'PostgREST API Service'    }
      - { name: Gitea      , url : 'http://git.pigsty'  , comment: 'Gitea Git Service'        }
      - { name: Minio      , url : 'https://m.pigsty'   , comment: 'Minio Object Storage'     }
      - { name: Wiki       , url : 'http://wiki.pigsty' , comment: 'Local Wikipedia'          }
      - { name: Noco       , url : 'http://noco.pigsty' , comment: 'Nocodb Example'           }
      - { name: Odoo       , url : 'http://odoo.pigsty' , comment: 'Odoo - the OpenERP'       }
      - { name: Explain    , url : '/pigsty/pev.html'   , comment: 'pgsql explain visualizer' }
      - { name: Package    , url : '/pigsty'            , comment: 'local yum repo packages'  }
      - { name: PG Logs    , url : '/logs'              , comment: 'postgres raw csv logs'    }
      - { name: Schemas    , url : '/schema'            , comment: 'schemaspy summary report' }
      - { name: Reports    , url : '/report'            , comment: 'pgbadger summary report'  }

    node_etc_hosts: [ "${admin_ip} sss.pigsty" ]
    dns_records: [ "${admin_ip} api.pigsty adm.pigsty cli.pigsty ddl.pigsty lab.pigsty git.pigsty wiki.pigsty noco.pigsty supa.pigsty dify.pigsty odoo.pigsty" ]
    pgbackrest_method: minio          # use minio as backup repo instead of 'local'

    # download docker and pg17 extensions
    repo_modules: infra,node,pgsql,docker
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, docker ]
    repo_extra_packages: [            # download kernel & all available extensions
      pgsql-main,                     # lots of extensions are not available on aarch64 yet, so comment the line below on aarch64
      pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl
    ]

Caveat

Beware that not all extensions are available on the aarch64 (arm64) architecture, so verify availability before adding the extensions you need on ARM.

To pick concrete packages, refer to the extension alias list at https://ext.pigsty.io and expand wildcard aliases such as pg17-core,pg17-time,... accordingly.
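
For example, a sketch of pinning downloads to PostgreSQL 17 category aliases instead of the version-agnostic pgsql-* wildcards (alias names taken from the commented lines in the meta template):

repo_extra_packages: [
  pg17-core, pg17-time, pg17-gis, pg17-rag, pg17-fts, pg17-olap, pg17-feat, pg17-lang,
  pg17-type, pg17-func, pg17-admin, pg17-stat, pg17-sec, pg17-fdw, pg17-sim, pg17-etl
]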

4 - 1-node: demo

Configuration file used by the Pigsty demo site, configured to serve publicly with domain names and certificates.

The demo is a configuration template used by the Pigsty demo site. It is configured to serve publicly with domain names and certificates.

If you wish to host a website on your cloud server, you can refer to this configuration template. It will illustrate how to expose your website to the public, configure SSL certificates, and install all the necessary extensions.
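
The key mechanism is an infra_portal entry per site: a public domain plus either a local path or an upstream endpoint, with optional certificate and key files. A minimal sketch with placeholder names (your.domain.com and the /etc/cert paths are assumptions to replace with your own):

infra_portal:
  website : { domain: your.domain.com ,path: "/www/your.site" ,cert: /etc/cert/your.crt ,key: /etc/cert/your.key }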


Overview

  • Conf Name: demo
  • Node Count: 1-node, pigsty/vagrant/spec/meta.rb
  • Description: Configuration file used by the Pigsty demo site, configured to serve publicly with domain names and certificates.
  • Content: pigsty/conf/demo.yml
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64
  • Related: meta, rich

This template uses a single-node deployment and enhances the meta configuration template as follows:

  • Downloads Docker packages (docker-ce, docker-compose-plugin) when building the local software repository.
  • Downloads all available PostgreSQL 17 extensions for the current x86_64 OS distribution when building the local software repository.
  • Installs all downloaded PostgreSQL 17 extensions in the default pg-meta cluster.
  • Explicitly specifies the node’s time zone and uses China’s NTP servers.
  • Deploys MinIO but does not use it for backups, to save storage in the demo environment.
  • Pre-configures a set of PG business databases and users for out-of-the-box use with Docker software templates.
  • Adds a miniature Redis primary-replica setup with three instances (one primary, two replicas).

  • Adds a MongoDB-compatible cluster based on FerretDB.

  • Adds a sample Kafka cluster (these extra modules are deployed with their own playbooks; see the sketch after this list).

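A minimal sketch of deploying those extra modules after the base installation, using the group names defined in this template (redis-ms, pg-mongo, kf-main):

./redis.yml -l redis-ms    # deploy the redis primary & replica instances
./mongo.yml -l pg-mongo    # deploy the FerretDB (mongo-compatible) cluster
./kafka.yml -l kf-main     # deploy the sample kafka cluster
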
To enable: Use the -c demo parameter during the configure process:

./configure -c demo [-i <primary_ip>]

Content

Source: pigsty/conf/demo.yml

all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:
        nodename: pigsty.cc       # overwrite the default hostname
        node_id_from_pg: false    # do not use the pg identity as hostname
        docker_enabled: true      # enable docker on this node
        docker_registry_mirrors: ["https://mirror.ccs.tencentyun.com"] # use tencent cloud docker mirror

        # ./pgsql-monitor.yml -l infra     # monitor 'external' PostgreSQL instance
        pg_exporters:             # treat local postgres as RDS for demonstration purpose
          20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 }
          #20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.11 , pg_port: 5432 }
          #20003: { pg_cluster: pg-bar, pg_seq: 2, pg_host: 10.10.10.12 , pg_exporter_url: 'postgres://dbuser_monitor:[email protected]:5432/postgres?sslmode=disable' }
          #20004: { pg_cluster: pg-bar, pg_seq: 3, pg_host: 10.10.10.13 , pg_monitor_username: dbuser_monitor, pg_monitor_password: DBUser.Monitor }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
          - {name: dbuser_odoo     ,password: DBUser.Odoo     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for odoo service ,createdb: true} #,superuser: true}
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector},{name: postgis},{name: timescaledb}]}
          - {name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database  }
          - {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - {name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong api gateway database }
          - {name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - {name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database  }
          - {name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database     }
          - {name: odoo     ,owner: dbuser_odoo     ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        pg_libs: 'timescaledb,pg_stat_statements, auto_explain'  # add timescaledb to shared_preload_libraries
        pg_extensions: # extensions to be installed on this cluster
          - timescaledb periods temporal_tables emaj table_version pg_cron pg_later pg_background #timescaledb_toolkit #pg_timeseries
          - postgis pgrouting pointcloud q3c geoip pg_polyline pg_geohash #pg_h3 #ogr_fdw #mobilitydb
          - pgvector pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pg_bigm zhparser hunspell
          - pg_analytics pg_duckdb duckdb_fdw pg_parquet pg_fkpart plproxy #citus #hydra #pg_mooncake #pg_partman #pg_strom
          - hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pgmq pgq pg_cardano #age #rdkit
          - pg_tle plv8 pllua pldebugger plpgsql_check plprofiler plsh pljava #plprql #plr #pgtap #faker #dbt2
          - prefix semver pgunit md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_currency pgmp numeral pg_rational pguint pg_uint128 ip4r pg_uri pgemailaddr acl timestamp9 chkpass #debversion #pg_rrule
          - topn pg_gzip pg_zstd pg_http pg_net pg_smtp_client pg_html5_email_address pgsql_tweaks pg_extra_time pg_timeit count_distinct extra_window_functions first_last_agg tdigest aggs_for_vecs aggs_for_arrays pg_arraymath quantile lower_quantile
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids pg_math pg_random pg_base36 pg_base62 pg_base58 floatvec pg_financial pgjwt pg_hashlib shacrypt cryptint pg_ecdsa pgpcre icu_ext pgqr envvar pg_protobuf url_encode
          - pg_repack pg_squeeze pg_dirtyread pgfincore ddlx pg_prioritize pg_readonly safeupdate pg_permissions pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_savior table_log pg_fio #pgdd #pg_checksums #pgautofailover #pgpool #pgagent
          - pg_profile pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis #pg_store_plans #pg_top #powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt anonymizer pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils noset #pg_tde
          - wrappers multicorn mysql_fdw tds_fdw sqlite_fdw pgbouncer_fdw redis_fdw pg_redis_pubsub hdfs_fdw firebird_fdw aws_s3 log_fdw #odbc_fdw #jdbc_fdw #oracle_fdw #db2_fdw #mongo_fdw #kafka_fdw
          - orafce pgtt session_variable pg_statement_rollback pg_dbms_metadata pg_dbms_lock pgmemcache #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots wal2json decoder_raw mimeo pg_fact_loader #wal2mongo #decoderbufs #repmgr #pg_bulkload

    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' }, 6381: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    # ./mongo.yml -l pg-mongo
    pg-mongo:
      hosts: { 10.10.10.10: { mongo_seq: 1 } }
      vars:
        mongo_cluster: pg-mongo
        mongo_pgurl: 'postgres://dbuser_meta:[email protected]:5432/grafana'

    # ./kafka.yml -l kf-main
    kf-main:
      hosts: { 10.10.10.10: { kafka_seq: 1, kafka_role: controller } }
      vars:
        kafka_cluster: kf-main
        kafka_peer_port: 29093 # 9093 is occupied by alertmanager


  vars:                               # global variables
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe
    infra_portal:                     # domain names and upstream servers
      home         : { domain: home.pigsty.cc }
      cc           : { domain: pigsty.cc      ,path:     "/www/pigsty.cc"   ,cert: /etc/cert/pigsty.cc.crt ,key: /etc/cert/pigsty.cc.key }
      grafana      : { domain: demo.pigsty.cc ,endpoint: "${admin_ip}:3000" ,websocket: true ,cert: /etc/cert/demo.pigsty.cc.crt ,key: /etc/cert/demo.pigsty.cc.key }
      prometheus   : { domain: p.pigsty.cc    ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty.cc    ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      minio        : { domain: m.pigsty.cc    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty.cc  ,endpoint: "127.0.0.1:8884"   }
      pgadmin      : { domain: adm.pigsty.cc  ,endpoint: "127.0.0.1:8885"   }
      pgweb        : { domain: cli.pigsty.cc  ,endpoint: "127.0.0.1:8886"   }
      bytebase     : { domain: ddl.pigsty.cc  ,endpoint: "127.0.0.1:8887"   }
      jupyter      : { domain: lab.pigsty.cc  ,endpoint: "127.0.0.1:8888", websocket: true }
      gitea        : { domain: git.pigsty.cc  ,endpoint: "127.0.0.1:8889" }
      wiki         : { domain: wiki.pigsty.cc ,endpoint: "127.0.0.1:9002" }
      noco         : { domain: noco.pigsty.cc ,endpoint: "127.0.0.1:9003" }
      supa         : { domain: supa.pigsty.cc ,endpoint: "10.10.10.10:8000", websocket: true }
      dify         : { domain: dify.pigsty.cc ,endpoint: "10.10.10.10:8001", websocket: true }
      odoo         : { domain: odoo.pigsty.cc, endpoint: "127.0.0.1:8069"  , websocket: true }
    # scp -r ~/pgsty/cc/cert/*       pj:/etc/cert/       # copy https certs
    # scp -r ~/dev/pigsty.cc/public  pj:/www/pigsty.cc   # copy pigsty.cc website

    nginx_navbar:                     # application nav links on home page
      - { name: PgAdmin4   , url: 'http://adm.pigsty.cc'           , comment: 'PgAdmin4 for PostgreSQL'     }
      - { name: PGWeb      , url: 'http://cli.pigsty.cc'           , comment: 'PGWEB Browser Client'        }
      - { name: Jupyter    , url: 'http://lab.pigsty.cc'           , comment: 'Jupyter Notebook WebUI'      }
      - { name: ByteBase   , url: 'http://ddl.pigsty.cc'           , comment: 'ByteBase Schema Migrator'    }
      - { name: PostgREST  , url: 'http://api.pigsty.cc'           , comment: 'PostgREST API Service'       }
      - { name: Gitea      , url: 'http://git.pigsty.cc'           , comment: 'Gitea Git Service'           }
      - { name: Minio      , url: 'https://m.pigsty.cc'            , comment: 'Minio Object Storage'        }
      - { name: Wiki       , url: 'http://wiki.pigsty.cc'          , comment: 'Local Wikipedia'             }
      - { name: Nocodb     , url: 'http://noco.pigsty.cc'          , comment: 'Nocodb Example'              }
      - { name: Odoo       , url: 'http://odoo.pigsty.cc'          , comment: 'Odoo - the OpenERP'          }
      - { name: Dify       , url: 'http://dify.pigsty.cc'          , comment: 'Dify - the LLM OPS'          }
      - { name: Explain    , url: '/pigsty/pev.html'               , comment: 'postgres explain visualizer' }
      - { name: Package    , url: '/pigsty'                        , comment: 'local yum repo packages'     }
      - { name: PG Logs    , url: '/logs'                          , comment: 'postgres raw csv logs'       }
      - { name: Schemas    , url: '/schema'                        , comment: 'schemaspy summary report'    }
      - { name: Reports    , url: '/report'                        , comment: 'pgbadger summary report'     }
      - { name: ISD        , url: '${grafana}/d/isd-overview'      , comment: 'noaa isd data visualization' }
      - { name: Covid      , url: '${grafana}/d/covid-overview'    , comment: 'covid data visualization'    }
      - { name: Worktime   , url: '${grafana}/d/worktime-overview' , comment: 'worktime query'              }
      - { name: DBTrend    , url: '${grafana}/d/dbeng-trending'    , comment: 'DB Engine Trending Graph'    }

    node_etc_hosts: [ "${admin_ip} sss.pigsty" ]
    node_timezone: Asia/Hong_Kong
    node_ntp_servers:
      - pool cn.pool.ntp.org iburst
      - pool ${admin_ip} iburst       # assume non-admin nodes do not have internet access
    pgbackrest_enabled: false         # do not take backups since this is disposable demo env

    # download docker and pg17 extensions
    repo_modules: infra,node,pgsql,docker
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, docker ]
    repo_extra_packages: [            # download pg17 kernel & all available extensions (x86_64)
      pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl
    ]

Caveat

Beware that not all extensions are available on the aarch64 (arm64) architecture, so verify availability before adding the extensions you need on ARM.

  • Check https://ext.pigsty.io for the extension alias list, and expand wildcard aliases such as pg17-core,pg17-time,... into the concrete packages you need.

5 - 1-node: pitr

A single-node configuration sample for continuous backups and PITR using remote S3/object storage.

The pitr template demonstrates how to use object storage for continuous backups and Point-in-Time Recovery (PITR) in the cloud with a single EC2/ECS server.


Overview

To enable: Use the -c pitr parameter during the configure process:

./configure -c pitr [-i <primary_ip>]
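
The crontab in this template schedules backups automatically; you can also trigger one manually with the pg-backup wrapper it references (a sketch, run as the postgres dbsu):

/pg/bin/pg-backup full     # take a full backup now
/pg/bin/pg-backup          # take an incremental backup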

Content

Source: pigsty/conf/pitr.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    #----------------------------------#
    # etcd cluster for HA postgres DCS
    #----------------------------------#
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    #----------------------------------#
    # pgsql (singleton on current node)
    #----------------------------------#
    # this is an example single-node postgres cluster with postgis & timescaledb installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta                 # required identity parameter, usually same as group name

        # define business databases here: https://pigsty.io/docs/pgsql/db/
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, postgres by default
            #template: template1            # optional, which template to use, template1 by default
            #encoding: UTF8                 # optional, database encoding, UTF8 by default. (MUST same as template database)
            #locale: C                      # optional, database locale, C by default.  (MUST same as template database)
            #lc_collate: C                  # optional, database collate, C by default. (MUST same as template database)
            #lc_ctype: C                    # optional, database ctype, C by default.   (MUST same as template database)
            #tablespace: pg_default         # optional, default tablespace, 'pg_default' by default.
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }  # define another database

        # define business users here: https://pigsty.io/docs/pgsql/user/
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            superuser: false                # optional, is superuser? false by default
            createdb: false                 # optional, can create database? false by default
            createrole: false               # optional, can create role? false by default
            inherit: true                   # optional, can this role use inherited privileges? true by default
            replication: false              # optional, can this role do replication? false by default
            bypassrls: false                # optional, can this role bypass row level security? false by default
            pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            connlimit: -1                   # optional, user connection limit, default -1 disable limit
            expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            comment: pigsty admin user      # optional, comment string for this user/role
            roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }

        # define pg extensions: https://pigsty.io/docs/pgext/
        pg_libs: 'pg_stat_statements, auto_explain' # preloaded shared libraries (add timescaledb here if needed)
        pg_extensions: [ pgvector ] # available extensions: https://ext.pigsty.io/#/list

        # define HBA rules here: https://pigsty.io/docs/pgsql/hba/#define-hba
        pg_hba_rules:                       # example hba rules
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        node_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    # ADD YOUR AK/SK/REGION/ENDPOINT HERE
    pgbackrest_method: s3             # use remote object storage as backup repo instead of 'local' fs
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      s3:                             # aliyun oss (s3 compatible) object storage service
        type: s3                      # oss is s3-compatible
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: PG.${pg_cluster} # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retain full backups by time on this repo
        retention_full: 14            # keep full backup for last 14 days

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty

    #----------------------------------#
    # Safe Guard
    #----------------------------------#
    # you can enable these flags after bootstrap, to prevent purging running etcd / pgsql instances
    etcd_safeguard: false             # prevent purging running etcd instance?
    pg_safeguard: false               # prevent purging running postgres instance? false by default

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]

Caveats

You must fill in your object storage credentials (s3_endpoint, s3_region, s3_bucket, s3_key, s3_key_secret) in pgbackrest_repo before installation.
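
Once backups and WAL archiving are flowing, a minimal PITR sketch using the pgBackRest CLI directly (assuming the default pg-meta stanza; stop the database before restoring, and note that Patroni-managed clusters need extra care):

pgbackrest --stanza=pg-meta info                 # check available backups & WAL archive
pgbackrest --stanza=pg-meta --type=time --target='2024-11-01 10:00:00+00' restore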

6 - 1-node: supa

Run self-hosted Supabase on an existing PostgreSQL cluster.

The supa template aims to self-host a single-node Supabase instance.

Check the Tutorial: Self-hosting Supabase for more details.


Overview

  • Conf Name: supa
  • Node Count: 1-node, pigsty/vagrant/spec/meta.rb
  • Description: Uses Docker Compose to start Supabase on a local single-node PostgreSQL instance.
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64
  • Related: meta

To enable: Use the -c supa parameter during the configure process:

./configure -c supa [-i <primary_ip>]

You can also use -v to specify the PostgreSQL major version to install; PostgreSQL 17, 16, and 15 are currently recommended for Supabase self-hosting.
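
For example, a sketch of pinning the kernel to PostgreSQL 17 (assuming -v takes the major version number):

./configure -c supa -v 17 [-i <primary_ip>]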


Content

Source: pigsty/conf/supa.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # pg-meta, the underlying postgres database for supabase
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          # supabase roles: anon, authenticated, dashboard_user
          - { name: anon           ,login: false }
          - { name: authenticated  ,login: false }
          - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
          - { name: service_role   ,login: false ,bypassrls: true }
          # supabase users: please use the same password
          - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true   ,roles: [ dbrole_admin ] ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
          - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] }
          - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] ,createrole: true }
          - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true ,roles: [ dbrole_admin ]}
          - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles: [ dbrole_readonly, pg_read_all_data ] }
        pg_databases:
          - name: postgres
            baseline: supabase.sql
            owner: supabase_admin
            comment: supabase postgres database
            schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
            extensions:
              - { name: pgcrypto  ,schema: extensions  } # 1.3   : cryptographic functions
              - { name: pg_net    ,schema: extensions  } # 0.9.2 : async HTTP
              - { name: pgjwt     ,schema: extensions  } # 0.2.0 : json web token API for postgres
              - { name: uuid-ossp ,schema: extensions  } # 1.1   : generate universally unique identifiers (UUIDs)
              - { name: pgsodium        }                # 3.1.9 : pgsodium is a modern cryptography library for Postgres.
              - { name: supabase_vault  }                # 0.2.8 : Supabase Vault Extension
              - { name: pg_graphql      }                # 1.5.9 : pg_graphql: GraphQL support
              - { name: pg_jsonschema   }                # 0.3.3 : pg_jsonschema: Validate json schema
              - { name: wrappers        }                # 0.4.3 : wrappers: FDW collections
              - { name: http            }                # 1.6   : http: allows web page retrieval inside the database.
              - { name: pg_cron         }                # 1.6   : pg_cron: Job scheduler for PostgreSQL
              - { name: timescaledb     }                # 2.17  : timescaledb: Enables scalable inserts and complex queries for time-series data
              - { name: pg_tle          }                # 1.2   : pg_tle: Trusted Language Extensions for PostgreSQL
              - { name: vector          }                # 0.8.0 : pgvector: the vector similarity search
        # supabase required extensions
        pg_libs: 'pg_stat_statements, plpgsql, plpgsql_check, pg_cron, pg_net, timescaledb, auto_explain, pg_tle, plan_filter'
        pg_extensions: # extensions to be installed on this cluster
          - supabase   # essential extensions for supabase
          - timescaledb postgis pg_graphql pg_jsonschema wrappers pg_search pg_analytics pg_parquet plv8 duckdb_fdw pg_cron pg_timetable pgqr
          - supautils pg_plan_filter passwordcheck plpgsql_check pgaudit pgsodium pg_vault pgjwt pg_ecdsa pg_session_jwt index_advisor
          - pgvector pgvectorscale pg_summarize pg_tiktoken pg_tle pg_stat_monitor hypopg pg_hint_plan pg_http pg_net pg_smtp_client pg_idkit
        pg_parameters:
          cron.database_name: postgres
          pgsodium.enable_event_trigger: off
        pg_hba_rules: # supabase hba rules, require access from docker network
          - { user: all ,db: postgres  ,addr: intra         ,auth: pwd ,title: 'allow supabase access from intranet'    }
          - { user: all ,db: postgres  ,addr: 172.17.0.0/16 ,auth: pwd ,title: 'allow access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup at 1am every day

    # launch supabase stateless part with docker compose: ./supabase.yml
    supabase:
      hosts:
        10.10.10.10: { supa_seq: 1 }  # instance id
      vars:
        supa_cluster: supa            # cluster name
        docker_enabled: true          # enable docker

        # use these to pull docker images via proxy and mirror registries
        #docker_registry_mirrors: ['https://docker.xxxxx.io']
        #proxy_env:   # add [OPTIONAL] proxy env to /etc/docker/daemon.json configuration file
        #  no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
        #  #all_proxy: http://user:pass@host:port

        # these configuration entries will OVERWRITE or APPEND to /opt/supabase/.env file (src template: app/supabase/.env)
        # check https://github.com/Vonng/pigsty/blob/main/app/supabase/.env for default values
        supa_config:

          # IMPORTANT: CHANGE JWT_SECRET AND REGENERATE THE CREDENTIALS ACCORDINGLY!
          # https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
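          # e.g. generate a fresh 40+ char secret (a sketch): openssl rand -base64 48
          # then re-sign the anon_key / service_role_key JWTs with the new secret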
          jwt_secret: your-super-secret-jwt-token-with-at-least-32-characters-long
          anon_key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
          service_role_key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
          dashboard_username: supabase
          dashboard_password: pigsty

          # postgres connection string (use the correct ip and port)
          postgres_host: 10.10.10.10
          postgres_port: 5436             # access via the 'default' service, which always routes to the primary postgres
          postgres_db: postgres
          postgres_password: DBUser.Supa  # password for supabase_admin and multiple supabase users

          # expose supabase via domain name
          site_url: http://supa.pigsty
          api_external_url: http://supa.pigsty
          supabase_public_url: http://supa.pigsty

          # if using s3/minio as file storage
          s3_bucket: supa
          s3_endpoint: https://sss.pigsty:9000
          s3_access_key: supabase
          s3_secret_key: S3User.Supabase
          s3_force_path_style: true
          s3_protocol: https
          s3_region: stub
          minio_domain_ip: 10.10.10.10  # sss.pigsty domain name will resolve to this ip statically

          # if using SMTP (optional)
          #smtp_admin_email: [email protected]
          #smtp_host: supabase-mail
          #smtp_port: 2500
          #smtp_user: fake_mail_user
          #smtp_pass: fake_mail_password
          #smtp_sender_name: fake_sender
          #enable_anonymous_users: false



  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      minio        : { domain: m.pigsty ,endpoint: "10.10.10.10:9001", https: true, websocket: true }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }  # expose supa studio UI and API via nginx

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty

    # use minio as supabase file storage, single-node single-drive mode for demonstration purposes
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: minioadmin      # root secret key, `minioadmin` by default
    minio_buckets: [ { name: pgsql }, { name: supa } ]
    minio_users:
      - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: S3User.Backup,   policy: readwrite }
      - { access_key: supabase   , secret_key: S3User.Supabase, policy: readwrite }
    minio_endpoint: https://sss.pigsty:9000    # explicitly override the minio endpoint
    node_etc_hosts: ["10.10.10.10 sss.pigsty"] # domain name to access minio from all nodes (required)

    # use minio as default backup repo for PostgreSQL
    pgbackrest_method: minio          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /pg/cert/ca.crt  # minio ca file path, `/pg/cert/ca.crt` by default
        bundle: y                     # bundle small files into a single file
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days

    # download docker and supabase related extensions
    pg_version: 17
    repo_modules: node,pgsql,infra,docker
    repo_packages: [node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, docker ]
    repo_extra_packages:
      - pgsql-main
      - supabase   # essential extensions for supabase
      - timescaledb postgis pg_graphql pg_jsonschema wrappers pg_search pg_analytics pg_parquet plv8 duckdb_fdw pg_cron pg_timetable pgqr
      - supautils pg_plan_filter passwordcheck plpgsql_check pgaudit pgsodium pg_vault pgjwt pg_ecdsa pg_session_jwt index_advisor
      - pgvector pgvectorscale pg_summarize pg_tiktoken pg_tle pg_stat_monitor hypopg pg_hint_plan pg_http pg_net pg_smtp_client pg_idkit
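
After the deployment playbooks complete, a quick way to check the stateless Supabase containers (a sketch, assuming the template's default /opt/supabase app directory):

cd /opt/supabase && docker compose ps   # list container status; all services should be up and healthy

The studio UI should then be reachable at http://supa.pigsty using the dashboard_username and dashboard_password configured above.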

7 - 1-node: bare

The minimal single-node config template, with only the essential components required to run a PostgreSQL instance.

The bare template is an extremely minimal config that includes only the essential components required to run a PostgreSQL instance.

This configuration is suitable for users who want to deploy a single-node PostgreSQL instance with minimal resources and dependencies. Any template with fewer components than this will not work properly.


Overview

To enable: Use the -c bare parameter during the configure process:

./configure -c bare [-i <primary_ip>]

Content

Source: pigsty/conf/demo/bare.yml

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }
  vars:
    version: v3.1.0
    admin_ip: 10.10.10.10
    region: default
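
After configure, apply the template with the one-pass install playbook (a sketch, assuming the standard Pigsty workflow):

./install.yml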

8 - 4-node: full

A four-node standard sandbox demonstration environment, featuring two PostgreSQL clusters, MinIO, Etcd, Redis, and a sample FerretDB cluster.

The full template is Pigsty's recommended sandbox template. It uses 4 nodes and deploys two PostgreSQL clusters, and can be used to test and demonstrate Pigsty's capabilities.

Most tutorials and examples are based on this sandbox environment.


Overview

  • Conf Name: full
  • Node Count: 4-node, pigsty/vagrant/spec/full.rb
  • Description: A four-node standard sandbox demonstration environment, featuring two PostgreSQL clusters, MinIO, Etcd, Redis, and a sample FerretDB cluster.
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: rich, pitr, demo

To enable: Use the -c full parameter during the configure process:

./configure -c full

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after configure.
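
A hypothetical one-liner to rewrite the placeholder node IPs in the generated pigsty.yml (substitute your real addresses):

sed -i 's/10.10.10.11/<ip2>/g; s/10.10.10.12/<ip3>/g; s/10.10.10.13/<ip4>/g' pigsty.yml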


Content

Source: pigsty/conf/full.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } }, vars: { docker_enabled: true } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta     ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] }
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

    # pgsql 3 node ha cluster: pg-test
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }
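    # e.g. smoke-test the redis primary from the admin node (a sketch):
    #   redis-cli -h 10.10.10.10 -p 6379 -a redis.ms ping   # expect PONG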


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_modules: infra,node,pgsql
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]


9 - 4-node: safe

A 3+1 node template with hardened security and a delayed cluster, following high-standard security best practices.

The safe template is a specialized configuration for security hardening, based on the full template.


Overview

  • Conf Name : safe
  • Node Count : 4-node, pigsty/vagrant/spec/full.rb
  • Description : A 3+1 node template with hardened security and a delayed cluster, following high-standard security best practices.
  • Content : pigsty/conf/safe.yml
  • OS Distro : el7, el8, el9, u20, u22, u24
  • OS Arch : x86_64
  • Related : full

To enable: Use the -c safe parameter during the configure process:

./configure -c safe

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after configure.


Content

Source: pigsty/conf/safe.yml

#==============================================================#
# File      :   safe.yml
# Desc      :   Pigsty 3-node security enhance template
# Ctime     :   2020-05-22
# Mtime     :   2024-11-12
# Docs      :   https://pigsty.io/docs/conf/safe
# Author    :   Ruohang Feng ([email protected])
# License   :   AGPLv3
#==============================================================#


#===== SECURITY ENHANCEMENT CONFIG TEMPLATE WITH 3 NODES ======#
#   * 3 infra nodes, 3 etcd nodes, single minio node
#   * 3-instance pgsql cluster with an extra delayed instance
#   * crit.yml templates, no data loss, checksum enforced
#   * enforce ssl on postgres & pgbouncer, use postgres by default
#   * enforce an expiration date for all users (20 years by default)
#   * enforce strong password policy with passwordcheck extension
#   * enforce changing default password for all users
#   * log connections and disconnections
#   * restrict listen ip address for postgres/patroni/pgbouncer


all:
  children:

    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars: { patroni_watchdog_mode: off }

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_conf: crit.yml
        pg_users:
          - { name: dbuser_meta , password: Pleas3-ChangeThisPwd ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: Make.3ure-Compl1ance  ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_services:
          - { name: standby , ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
        pg_listen: '${ip},${vip},${lo}'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

    # OPTIONAL delayed cluster for pg-meta
    pg-meta-delay: # delayed instance for pg-meta (1 hour ago)
      hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
      vars: { pg_cluster: pg-meta-delay }


  ####################################################################
  #                          Parameters                              #
  ####################################################################
  vars: # global variables
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    patroni_ssl_enabled: true         # secure patroni RestAPI communications with SSL?
    pgbouncer_sslmode: require        # pgbouncer client ssl mode: disable|allow|prefer|require|verify-ca|verify-full, disable by default
    pg_default_service_dest: postgres # default service destination to postgres instead of pgbouncer
    pgbackrest_method: minio          # pgbackrest repo method: local,minio,[user-defined...]

    #----------------------------------#
    # Credentials
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: You.Have2Use-A_VeryStrongPassword
    #pg_admin_username: dbuser_dba
    pg_admin_password: PessWorb.Should8eStrong-eNough
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: MekeSuerYour.PassWordI5secured
    #pg_replication_username: replicator
    pg_replication_password: doNotUseThis-PasswordFor.AnythingElse
    #patroni_username: postgres
    patroni_password: don.t-forget-to-change-thEs3-password
    #haproxy_admin_username: admin
    haproxy_admin_password: GneratePasswordWith-pwgen-s-16-1
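    # e.g. generate a strong random password (a sketch): pwgen -s 16 1   # or: openssl rand -base64 16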

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    minio_users: # and configure `pgbackrest_repo` & `minio_users` accordingly
      - { access_key: dba , secret_key: S3User.DBA.Strong.Password, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: Min10.bAckup ,policy: readwrite }
    pgbackrest_repo: # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local: # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio: # optional minio repo for pgbackrest
        s3_key: pgbackrest            # <-------- CHANGE THIS, SAME AS `minio_users` access_key
        s3_key_secret: Min10.bAckup   # <-------- CHANGE THIS, SAME AS `minio_users` secret_key
        cipher_pass: 'pgBR.${pg_cluster}'  # <-------- CHANGE THIS, you can use cluster name as part of password
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        bundle: y                     # bundle small files into a single file
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days


    #----------------------------------#
    # Access Control
    #----------------------------------#
    # add passwordcheck extension to enforce strong password policy
    pg_libs: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
    pg_extensions:
      - passwordcheck, supautils, pgsodium, pg_vault, pg_session_jwt, anonymizer, pgsmcrypto, pgauditlogtofile, pgaudit #, pgaudit17, pgaudit16, pgaudit15, pgaudit14
      - pg_auth_mon, credcheck, pgcryptokey, pg_jobmon, logerrors, login_hook, set_user, pgextwlist, pg_auditor, sslutils, noset #pg_tde #pg_snakeoil
    pg_default_roles: # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [ dbrole_readonly ]               ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [ pg_monitor, dbrole_readwrite ]  ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
      - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [ pg_monitor, dbrole_readonly ]   ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [ dbrole_admin ]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [ pg_monitor ] ,expire_in: 7300 ,pgbouncer: true ,parameters: { log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_hba_rules: # postgres host-based auth rules by default
      - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'   }
      - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident'  }
      - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: ssl   ,title: 'replicator replication from localhost' }
      - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: ssl   ,title: 'replicator replication from intranet'  }
      - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: ssl   ,title: 'replicator postgres db from intranet'  }
      - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password'  }
      - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: ssl   ,title: 'monitor from infra host with password' }
      - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'    }
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'    }
      - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: ssl   ,title: 'pgbouncer read/write via local socket' }
      - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: ssl   ,title: 'read/write biz user via password'      }
      - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: ssl   ,title: 'allow etl offline tasks from intranet' }
    pgb_default_hba_rules: # pgbouncer host-based authentication rules
      - { user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident' }
      - { user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd'  }
      - { user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: ssl   ,title: 'monitor access via intranet with pwd'  }
      - { user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr'  }
      - { user: '${admin}'   ,db: all         ,addr: intra     ,auth: ssl   ,title: 'admin access via intranet with pwd'    }
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'    }
      - { user: 'all'        ,db: all         ,addr: intra     ,auth: ssl   ,title: 'allow all user intra access with pwd'  }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql    # install upstream repo during repo bootstrap
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main, pgsql-sec,
      passwordcheck, supautils, pgsodium, pg_vault, pg_session_jwt, anonymizer, pgsmcrypto, pgauditlogtofile, pgaudit, #, pgaudit17, pgaudit16, pgaudit15, pgaudit14
      pg_auth_mon, credcheck, pgcryptokey, pg_jobmon, logerrors, login_hook, set_user, pgextwlist, pg_auditor, sslutils, noset #pg_tde #pg_snakeoil
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]
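
Since this template enforces SSL for replication, monitor, and admin access, you can verify that a client session is actually encrypted (a sketch using stock psql, assuming the Pigsty CA at /etc/pki/ca.crt on managed nodes; adjust host and user):

psql "host=10.10.10.10 user=dbuser_meta dbname=meta sslmode=verify-ca sslrootcert=/etc/pki/ca.crt" \
  -c "SELECT ssl, version FROM pg_stat_ssl WHERE pid = pg_backend_pid();"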

Caveat

Beware that not all security-related extensions are available in ARM64 environments; ARM users should enable extensions at their discretion.

10 - 4-node: mssql

Replaces PostgreSQL with a Microsoft SQL Server-compatible kernel using WiltonDB or Babelfish.

The mssql config template is based on the full template. It uses the WiltonDB / Babelfish kernel in place of the vanilla PostgreSQL kernel, providing Microsoft SQL Server compatibility at the wire protocol level.

Check Babelfish (MSSQL) kernel description for more details


Overview

  • Conf Name: mssql
  • Node Count: 4-node, pigsty/vagrant/spec/full.rb
  • Description: Replaces PostgreSQL with a Microsoft SQL Server-compatible kernel using WiltonDB or Babelfish.
  • Content: pigsty/conf/mssql.yml
  • OS Distro: el7, el8, el9, u20, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: full

Usage: configure with -c mssql:

./configure -c mssql

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after configure.


Content

Source: pigsty/conf/mssql.yml

all:
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    #----------------------------------#
    # etcd cluster for HA postgres DCS
    #----------------------------------#
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    #----------------------------------#
    # pgsql (singleton on current node)
    #----------------------------------#
    # this is an example single-node postgres cluster
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
      vars:
        pg_cluster: pg-meta
        pg_users:                           # create MSSQL superuser
          - {name: dbuser_mssql ,password: DBUser.MSSQL ,superuser: true, pgbouncer: true ,roles: [dbrole_admin], comment: superuser & owner for babelfish  }
        pg_databases:
          - name: mssql
            baseline: mssql.sql             # init babelfish database & user
            extensions:
              - { name: uuid-ossp          }
              - { name: babelfishpg_common }
              - { name: babelfishpg_tsql   }
              - { name: babelfishpg_tds    }
              - { name: babelfishpg_money  }
              - { name: pg_hint_plan       }
              - { name: system_stats       }
              - { name: tds_fdw            }
            owner: dbuser_mssql
            parameters: { 'babelfishpg_tsql.migration_mode' : 'multi-db' }
            comment: babelfish cluster, a MSSQL compatible pg cluster

    #----------------------------------#
    # pgsql (3-node pgsql/mssql cluster)
    #----------------------------------#
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica }
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-test
        pg_users:                           # create MSSQL superuser
          - {name: dbuser_mssql ,password: DBUser.MSSQL ,superuser: true, pgbouncer: true ,roles: [dbrole_admin], comment: superuser & owner for babelfish  }
        pg_primary_db: mssql                # use `mssql` as the primary sql server database
        pg_databases:
          - name: mssql
            baseline: mssql.sql             # init babelfish database & user
            extensions:
              - { name: uuid-ossp          }
              - { name: babelfishpg_common }
              - { name: babelfishpg_tsql   }
              - { name: babelfishpg_tds    }
              - { name: babelfishpg_money  }
              - { name: pg_hint_plan       }
              - { name: system_stats       }
              - { name: tds_fdw            }
            owner: dbuser_mssql
            parameters: { 'babelfishpg_tsql.migration_mode' : 'single-db' }
            comment: babelfish cluster, a MSSQL compatible pg cluster

  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }

    #----------------------------------#
    # NODE, PGSQL, MSSQL
    #----------------------------------#
    pg_version: 15                     # The current WiltonDB major version is 15
    pg_packages:                       # install forked version of postgresql with babelfishpg support
      - wiltondb patroni pgbouncer pgbackrest pg_exporter pgbadger vip-manager
    pg_extensions: [ ]                 # do not install any vanilla postgresql extensions
    pg_mode: mssql                     # Microsoft SQL Server Compatible Mode
    pg_libs: 'babelfishpg_tds, pg_stat_statements, auto_explain' # add babelfishpg_tds to shared_preload_libraries
    pg_default_hba_rules: # overwrite default HBA rules for babelfish cluster
      - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident' }
      - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' }
      - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost' }
      - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' }
      - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' }
      - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' }
      - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password' }
      - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl' }
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd' }
      - { user: dbuser_mssql ,db: mssql       ,addr: intra     ,auth: md5   ,title: 'allow mssql dbsu intranet access' } # <--- use md5 auth method for mssql user
      - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket' }
      - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password' }
      - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet' }
    pg_default_services: # route primary & replica service to mssql port 1433
      - { name: primary ,port: 5433 ,dest: 1433  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: 1433  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]" }

    # download wiltondb instead of postgresql kernel
    repo_modules: node,pgsql,infra,mssql
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
    repo_extra_packages: [ wiltondb, sqlcmd ] # replace pgsql kernel with wiltondb/babelfish
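
Once the cluster is up, you can smoke-test the TDS endpoint with the bundled sqlcmd client (a sketch; the primary service also proxies port 5433 on any member to the primary's TDS port 1433):

sqlcmd -S 10.10.10.11,1433 -U dbuser_mssql -P 'DBUser.MSSQL' -Q "SELECT @@VERSION"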

Caveat

Beware that WiltonDB is only available on EL 7/8/9 and Ubuntu 20.04/22.04/24.04 systems; Debian systems are not supported at the moment.

WiltonDB is compatible with PostgreSQL 15.

11 - 4-node: polar

Substitutes native PostgreSQL with Alibaba Cloud’s PolarDB for PostgreSQL kernel.

The polar template is based on the 4-node full template.

It replaces the native PostgreSQL kernel with Alibaba Cloud’s PolarDB for PostgreSQL, providing a “cloud-native”, Aurora-like PostgreSQL experience.

Check PolarDB for PostgreSQL (POLAR) Kernel for details.


Overview

  • Conf Name: polar
  • Node Count: 4-node, pigsty/vagrant/spec/full.rb
  • Description: Substitutes native PostgreSQL with Alibaba Cloud’s PolarDB for PostgreSQL kernel.
  • OS Distro: el7, el8, el9, u20, u22, u24
  • OS Arch: x86_64
  • Related: full

To enable: Use the -c polar parameter during the configure process:

./configure -c polar

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after configure.


Content

Source: pigsty/conf/polar.yml

all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } }}

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

    # pgsql 3 node ha cluster: pg-test
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1

  vars:                               # global variables
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }

    #----------------------------------#
    # NODE, PGSQL, PolarDB
    #----------------------------------#
    # THIS SPEC REQUIRES AN AVAILABLE POLARDB KERNEL IN THE LOCAL REPO!
    pg_version: 15
    pg_packages: [ 'polardb patroni pgbouncer pgbackrest pg_exporter pgbadger vip-manager' ]
    pg_extensions: [ ]                # do not install any vanilla postgresql extensions
    pg_mode: polar                    # polardb compatible mode
    pg_exporter_exclude_database: 'template0,template1,postgres,polardb_admin'
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator   ,superuser: true  ,replication: true ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator } # <- superuser is required for replication
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
    repo_extra_packages: [ polardb ] # replace vanilla postgres kernel with polardb kernel

Caveat

PolarDB for PostgreSQL does not have ARM architecture packages available at the moment.

It is currently only available on EL 7/8/9 and Ubuntu 20.04/22.04/24.04 systems, and Debian-based operating system support is not currently available.

The latest version of PolarDB is based on the PostgreSQL 15 fork.

12 - 4-node: ivory

Replaces native PostgreSQL with IvorySQL, an Oracle-compatible kernel by HighGo.

The ivory template is based on the full 4-node template, which replaces the native PostgreSQL kernel with IvorySQL, an Oracle-compatible kernel by HighGo.

Check the Kernel description: IvorySQL for details.


Overview

  • Conf Name: ivory
  • Node Count: 4-node, pigsty/vagrant/spec/full.rb
  • Description: Replaces native PostgreSQL with IvorySQL, an Oracle-compatible kernel by HighGo.
  • OS Distro: el7, el8, el9
  • OS Arch: x86_64, aarch64 (no el7.aarch64)
  • Related: full

To enable: Use the -c ivory parameter during the configure process:

./configure -c ivory

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after configure.


Content

Source: pigsty/conf/ivory.yml

all:

  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    #----------------------------------#
    # etcd cluster for HA postgres DCS
    #----------------------------------#
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    #----------------------------------#
    # pgsql (singleton on current node)
    #----------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] }]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica }
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1

  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }

    #----------------------------------#
    # Ivory SQL Configuration
    #----------------------------------#
    pg_mode: ivory                    # IvorySQL Oracle Compatible Mode
    pg_version: 16                    # The current IvorySQL compatible major version is 16
    pg_packages: [ 'ivorysql patroni pgbouncer pgbackrest pg_exporter pgbadger vip-manager' ]
    pg_libs: 'liboracle_parser, pg_stat_statements, auto_explain'
    pgbackrest_enabled: false         # got checksum error when using oracle compatible mode

    repo_modules: node,pgsql,infra,ivory
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ] #docker
    repo_extra_packages: [ ivorysql ] # replace default postgresql kernel with ivorysql packages

Caveat

Beware that the IvorySQL kernel is only available on EL 7/8/9, and EL7 is no longer supported by the official developers.

IvorySQL provides RPM packages for x86_64 (amd64) and aarch64 (arm64), but EL7 does not support the aarch64 architecture.

Please note that the current version of IvorySQL is compatible with the PostgreSQL 16 major version.

13 - 4-node: minio

4-node HA MinIO MNMD cluster to provide S3-compatible object storage services.

The minio template is based on the full template.

It has a four-node x four-drive configuration that defines a 16-drive MinIO cluster.

Check the MINIO module docs for more information.


Overview

  • Conf Name : minio
  • Node Count : 4-node, pigsty/vagrant/spec/minio.rb
  • Description : 4-node HA MinIO MNMD cluster to provide S3-compatible object storage services.
  • Content : pigsty/conf/minio.yml
  • OS Distro : el7, el8, el9, d12, u20, u22, u24
  • OS Arch : x86_64, aarch64
  • Related : full

To enable: Use the -c minio parameter during the configure process:

./configure -c minio

This is a 4-node template; you need to modify the IP addresses of the other 3 nodes after configure.


Content

Source: pigsty/conf/minio.yml

all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # minio cluster with 4 nodes and 4 drives per node
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
        10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
        10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
        10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
      vars:
        minio_cluster: minio
        minio_data: '/data{1...4}'
        minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]
        minio_users:
          - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
          - { access_key: pgbackrest , secret_key: S3User.SomeNewPassWord , policy: readwrite }

        # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
        node_cluster: minio
        vip_enabled: true
        vip_vrid: 128
        vip_address: 10.10.10.9
        vip_interface: eth1

        # expose minio service with haproxy on all nodes
        haproxy_services:
          - name: minio                    # [REQUIRED] service name, unique
            port: 9002                     # [REQUIRED] service port, unique
            balance: leastconn             # [OPTIONAL] load balancer algorithm
            options:                       # [OPTIONAL] minio health check
              - option httpchk
              - option http-keep-alive
              - http-check send meth OPTIONS uri /minio/health/live
              - http-check expect status 200
            servers:
              - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

  vars:
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }

      # domain names to access minio web console via nginx web portal (optional)
      minio        : { domain: m.pigsty     ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio10      : { domain: m10.pigsty   ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio11      : { domain: m11.pigsty   ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
      minio12      : { domain: m12.pigsty   ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
      minio13      : { domain: m13.pigsty   ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }

    minio_endpoint: https://sss.pigsty:9002   # explicitly override the minio endpoint with the haproxy port
    node_etc_hosts: ["10.10.10.9 sss.pigsty"] # domain name to access minio from all nodes (required)
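
Once deployed, you can verify that MinIO is reachable through the load-balanced endpoint. A minimal sanity check, assuming the sample credentials above and the mcli client shipped with Pigsty's MinIO module; run it from a managed node, since sss.pigsty is resolved via node_etc_hosts, and add --insecure if the Pigsty CA is not trusted by your client:

curl -k https://sss.pigsty:9002/minio/health/live   # expect HTTP 200 from any live node
mcli alias set sss https://sss.pigsty:9002 pgbackrest S3User.SomeNewPassWord
mcli ls sss/pgsql                                   # list the pgsql bucket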


14 - 2-node: dual

Dual-node template to set up a basic high-availability PostgreSQL cluster with master-slave replication, tolerating the failure of one node.

This template illustrates a “half” high-availability PostgreSQL cluster with primary-replica replication, tolerating the failure of one specific node. It is a good choice if you only have two servers.


Overview

  • Conf Name: dual
  • Node Count: 2-node, pigsty/vagrant/spec/dual.rb
  • Description: Dual-node template to set up a basic high-availability PostgreSQL cluster with master-slave replication, tolerating the failure of one node.
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: slim

Description

To enable: Use the -c dual parameter during the configure process:

./configure -c dual [-i <primary_ip>]

You may have to replace the placeholder IP address of the other node, 10.10.10.11, with your actual node IP.
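
For example, a quick way to swap the placeholder for your real second node in the generated config (the target IP here is illustrative):

sed -i 's/10.10.10.11/<your_second_node_ip>/g' pigsty.yml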


Content

Source: pigsty/conf/dual.yml

all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, optional backup repo for pgbackrest
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres cluster 'pg-meta' with single primary instance
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: replica }
        10.10.10.11: { pg_seq: 2, pg_role: primary }  # <----- use this as primary by default
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector }] } ]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup at 1am every day
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  vars:                               # global parameters
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      #minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    # consider using local fs or external s3 service for cold backup storage in dual node configuration
    #pgbackrest_method: minio

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql    # install upstream repo during repo bootstrap
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]

Caveat

Typically, in production environments, a complete high-availability deployment requires at least three nodes, so that the cluster can continue operating normally if any single server fails. This is because failure detection relies on the DCS (etcd) used by Patroni, which needs a majority of nodes to form a quorum; with only two nodes, no majority survives the loss of either one.

However, sometimes only two servers are available. In such cases, the dual template is a feasible option, assuming you have two servers:

  • Node A, 10.10.10.10: The default admin node, running the Infra module, a single-node etcd, and a PostgreSQL standby.
  • Node B, 10.10.10.11: Serves only as the PostgreSQL primary.

In this scenario, the dual-node template tolerates the failure of node B: after node B fails, the cluster automatically fails over to node A. However, if node A fails entirely (the whole node crashes), manual intervention is required. That said, if node A is not completely offline and only its etcd or PostgreSQL instance has issues, the system as a whole can still continue to operate normally.

This template uses an L2 VIP for high-availability access. If your network conditions do not permit an L2 VIP (for example, in restricted cloud environments or across switch broadcast domains), consider using DNS resolution or another access method instead.
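
Since the L2 VIP (10.10.10.2 above) always follows the primary, clients can use it as a stable entry point. A minimal connectivity check, assuming the default dbuser_meta credentials from this template:

psql 'host=10.10.10.2 port=5432 user=dbuser_meta password=DBUser.Meta dbname=meta' -c 'SELECT 1;'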

15 - 2-node: slim

Minimal installation that avoids setting up local software repositories or infrastructure, relying only on etcd for a highly available PostgreSQL cluster.

The slim template is a minimal installation example that skips the local software repo and all infrastructure components, focusing on a pure PostgreSQL HA cluster that relies only on etcd.

If you need the simplest working database instance without deploying monitoring and other dependencies, consider the slim install mode.


Overview

  • Conf Name: slim
  • Node Count: 2-node, pigsty/vagrant/spec/dual.rb
  • Description: Minimal installation that avoids setting up local software repositories or infrastructure, relying only on etcd for a highly available PostgreSQL cluster.
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: dual

Description

To enable: Use the -c slim parameter during the configure process:

./configure -c slim [-i <primary_ip>]

Beware: this is a two-node template; you need to replace the IP address of the other node after configuration.


Content

Source: pigsty/conf/slim.yml

all:
  children:

    # infra group is defined but not actually used in slim mode
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres, still required in minimal installation
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # postgres cluster 'pg-meta' with 2 instances
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector }] } ]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup at 1am every day

  vars:                               # global parameters
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    # slim installation setup
    nginx_enabled: false              # nginx is not installed
    dns_enabled: false                # dnsmasq is not installed
    prometheus_enabled: false         # prometheus is not installed
    grafana_enabled: false            # grafana is not installed
    pg_exporter_enabled: false        # disable pg_exporter
    pgbouncer_exporter_enabled: false
    pg_vip_enabled: false

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    node_repo_modules: node,infra,pgsql # use node_repo_modules instead of repo_modules
    node_repo_remove: true            # remove existing node repo for node managed by pigsty?
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]

Caveat

There’s no monitoring or alerting in this mode, since there’s no infra module installed.
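
The cluster itself remains fully functional nonetheless. A basic sanity check after installation, assuming the default users defined above; the replica on 10.10.10.11 should report true for pg_is_in_recovery():

psql -h 10.10.10.10 -U dbuser_meta -d meta -c 'SELECT pg_is_in_recovery();'   # primary: f
psql -h 10.10.10.11 -U dbuser_meta -d meta -c 'SELECT pg_is_in_recovery();'   # replica: t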

16 - 3-node: trio

Three-node configuration template providing a standard High Availability (HA) architecture, tolerating failure of one out of three nodes.

The 3-node configuration template is a standard HA architecture, tolerating failure of 1 out of 3 nodes. This configuration is the minimum spec for achieving true high availability. In this case, the DCS (etcd) can tolerate the failure of 1 node.

The INFRA, ETCD, and PGSQL core modules are all deployed with 3 nodes, allowing one node to fail.
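
With three etcd members, the DCS keeps quorum as long as any two members are alive. A quick health check sketch, assuming etcdctl is on the PATH; Pigsty enables TLS for etcd by default, so client certificate flags are required (the certificate paths shown here are assumptions, adjust to your deployment):

etcdctl --endpoints=https://10.10.10.10:2379,https://10.10.10.11:2379,https://10.10.10.12:2379 \
  --cacert=/etc/pki/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
  endpoint health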


Overview

  • Conf Name: trio
  • Node Count: 3-node / pigsty/vagrant/spec/trio.rb
  • Description: Three-node configuration template providing a standard High Availability (HA) architecture, tolerating failure of one out of three nodes.
  • OS Distro: el8, el9, d12, u22, u24
  • OS Arch: x86_64
  • Related: safe

Usage: configure with -c trio:

./configure -c trio

Note: This is a 3-node template; you may have to change the other two nodes' IP addresses after generating the configuration.
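
For example, both placeholder IPs can be swapped in one pass (the target IPs here are illustrative):

sed -i -e 's/10.10.10.11/<node2_ip>/g' -e 's/10.10.10.12/<node3_ip>/g' pigsty.yml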


Content

Source: pigsty/conf/trio.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        patroni_watchdog_mode: off # do not fence infra nodes
        docker_enabled: true  # install with ./docker.yml

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta , password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: DBUser.View ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      #minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql    # install upstream repo during repo bootstrap
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]


17 - 5-node: oss

Building offline packages for 5 major OS distros simultaneously.

The oss template is a local build template used by Pigsty to build offline packages in parallel.


Overview

Usage: replace the pigsty.yml config with it:

cp conf/build/oss.yml pigsty.yml

The IP addresses in this template are hard-coded; adjust them to match your build environment.


Content

Source: pigsty/conf/build/oss.yml


all:

  vars:
    version: v3.1.0
    admin_ip: 10.10.10.8
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"

    pg_version: 17
    repo_modules: infra,node,pgsql,docker #pg16,pg15,pg14,pg13,pg12,kube,mssql,ivory
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, docker ]
    repo_url_packages:                      # extra packages from url
      - { name: "pev.html"    ,url: "http://10.10.10.1/etc/pev-1.12.1.html"    }
      - { name: "chart.tgz"   ,url: "http://10.10.10.1/etc/chart-1.0.0.tgz"    }
      - { name: "plugins.tgz" ,url: "http://10.10.10.1/etc/plugins-11.3.0.tgz" }

  children:
    infra:
      hosts:
        10.10.10.8:  { infra_seq: 1, admin_ip: 10.10.10.8  }
        10.10.10.9:  { infra_seq: 2, admin_ip: 10.10.10.9  }
        10.10.10.12: { infra_seq: 3, admin_ip: 10.10.10.12 }
        10.10.10.22: { infra_seq: 4, admin_ip: 10.10.10.22 }
        10.10.10.24: { infra_seq: 5, admin_ip: 10.10.10.24 }
      vars: { node_conf: oltp }

    etcd: { hosts: { 10.10.10.8:  { etcd_seq: 1 }}, vars: {  etcd_cluster: etcd  } }

    rpm:  # overwrite rpm building upstream with local repo to accelerate building
      hosts:
        10.10.10.8:  { }
        10.10.10.9:  { }
      vars:
        repo_upstream:
          - { name: pigsty-local   ,description: 'Pigsty Local'      ,module: local   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty'  }} # used by intranet nodes
          - { name: pigsty-infra   ,description: 'Pigsty INFRA'      ,module: infra   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://10.10.10.1/yum/infra/$basearch' ,china: 'http://10.10.10.1/yum/infra/$basearch' }}
          - { name: pigsty-pgsql   ,description: 'Pigsty PGSQL'      ,module: pgsql   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://10.10.10.1/yum/pgsql/el$releasever.$basearch' ,china: 'http://10.10.10.1/yum/pgsql/el$releasever.$basearch' }}
          - { name: nginx          ,description: 'Nginx Repo'        ,module: infra   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://nginx.org/packages/rhel/$releasever/$basearch/' }}
          - { name: baseos         ,description: 'EL 8+ BaseOS'      ,module: node    ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/'          ,europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/'     }}
          - { name: appstream      ,description: 'EL 8+ AppStream'   ,module: node    ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/'  ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/'       ,europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/'  }}
          - { name: extras         ,description: 'EL 8+ Extras'      ,module: node    ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/'          ,europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/'     }}
          - { name: powertools     ,description: 'EL 8 PowerTools'   ,module: node    ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/' ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/'      ,europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' }}
          - { name: crb            ,description: 'EL 9 CRB'          ,module: node    ,releases: [    9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/'        ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/'             ,europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/'        }}
          - { name: epel           ,description: 'EL 8+ EPEL'        ,module: node    ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/Everything/$basearch/' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/Everything/$basearch/' ,europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/'     }}
          - { name: pgdg-common    ,description: 'PostgreSQL Common' ,module: pgsql   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg-el8fix    ,description: 'PostgreSQL EL8FIX' ,module: pgsql   ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' } }
          - { name: pgdg-el9fix    ,description: 'PostgreSQL EL9FIX' ,module: pgsql   ,releases: [    9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-x86_64/'  ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-x86_64/' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-x86_64/' }}
          - { name: pgdg17         ,description: 'PostgreSQL 17'     ,module: pgsql   ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg17-nonfree ,description: 'PostgreSQL 17+'    ,module: pgsql   ,releases: [  8,9] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' }}
          - { name: timescaledb    ,description: 'TimescaleDB'       ,module: pgsql   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch'  }}
          # these repos are not used by default, but can be enabled by setting repo_modules
          - { name: pgdg-extras    ,description: 'PostgreSQL Extra'  ,module: extra   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg12         ,description: 'PostgreSQL 12'     ,module: pg12    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/12/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/12/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg12-nonfree ,description: 'PostgreSQL 12+'    ,module: pg12    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/12/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/12/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/12/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg13         ,description: 'PostgreSQL 13'     ,module: pg13    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg13-nonfree ,description: 'PostgreSQL 13+'    ,module: pg13    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg14         ,description: 'PostgreSQL 14'     ,module: pg14    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg14-nonfree ,description: 'PostgreSQL 14+'    ,module: pg14    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg15         ,description: 'PostgreSQL 15'     ,module: pg15    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg15-nonfree ,description: 'PostgreSQL 15+'    ,module: pg15    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg16         ,description: 'PostgreSQL 16'     ,module: pg16    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg16-nonfree ,description: 'PostgreSQL 16+'    ,module: pg16    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg17         ,description: 'PostgreSQL 17'     ,module: pg17    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' }}
          - { name: pgdg17-nonfree ,description: 'PostgreSQL 17+'    ,module: pg17    ,releases: [7,8,9] ,arch: [x86_64,        ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' }}
          - { name: docker-ce      ,description: 'Docker CE'         ,module: docker  ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable'    ,china: 'https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable'  ,europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
          - { name: kubernetes     ,description: 'Kubernetes'        ,module: kube    ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.31/rpm/', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/' }}
          - { name: wiltondb       ,description: 'WiltonDB'          ,module: mssql   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.copr.fedorainfracloud.org/results/wiltondb/wiltondb/epel-$releasever-$basearch/'  }}
          - { name: ivorysql       ,description: 'IvorySQL'          ,module: ivory   ,releases: [7,8,9] ,arch: [x86_64         ] ,baseurl: { default: 'https://yum.highgo.ca/dists/ivorysql-rpms/3/redhat/rhel-$releasever-$basearch', china: 'http://10.10.10.1/yum/ivory/el$releasever.$basearch' }}
          - { name: mysql          ,description: 'MySQL'             ,module: mysql   ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/yum/mysql-8.0-community/el/$releasever/$basearch/', china: 'https://mirrors.tuna.tsinghua.edu.cn/mysql/yum/mysql-8.0-community-el7-$basearch/' }}
          - { name: grafana        ,description: 'Grafana'           ,module: grafana ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpm.grafana.com' }}

    deb:  # overwrite deb building upstream with local repo to accelerate building
      hosts:
        10.10.10.12: { }
        10.10.10.22: { }
        10.10.10.24: { }
      vars:
        repo_upstream:
          - { name: pigsty-local  ,description: 'Pigsty Local'     ,module: local     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty ./' }}
          - { name: pigsty-pgsql  ,description: 'Pigsty PgSQL'     ,module: pgsql     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://10.10.10.1/apt/pgsql/${distro_codename} ${distro_codename} main', china: 'http://10.10.10.1/apt/pgsql/${distro_codename} ${distro_codename} main' }}
          - { name: pigsty-infra  ,description: 'Pigsty Infra'     ,module: infra     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://10.10.10.1/apt/infra/ generic main' ,china: 'http://10.10.10.1/apt/infra/ generic main' }}
          - { name: nginx         ,description: 'Nginx'            ,module: infra     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://nginx.org/packages/${distro_name} ${distro_codename} nginx' }}
          - { name: base          ,description: 'Debian Basic'     ,module: node      ,releases: [11,12         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename} main non-free-firmware'         ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename} main restricted universe multiverse' }}
          - { name: updates       ,description: 'Debian Updates'   ,module: node      ,releases: [11,12         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename}-updates main non-free-firmware' ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename}-updates main restricted universe multiverse' }}
          - { name: security      ,description: 'Debian Security'  ,module: node      ,releases: [11,12         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://security.debian.org/debian-security ${distro_codename}-security main non-free-firmware' }}
          - { name: base          ,description: 'Ubuntu Basic'     ,module: node      ,releases: [      20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}           main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}           main restricted universe multiverse' }}
          - { name: updates       ,description: 'Ubuntu Updates'   ,module: node      ,releases: [      20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' }}
          - { name: backports     ,description: 'Ubuntu Backports' ,module: node      ,releases: [      20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' }}
          - { name: security      ,description: 'Ubuntu Security'  ,module: node      ,releases: [      20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' }}
          - { name: base          ,description: 'Ubuntu Basic'     ,module: node      ,releases: [      20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}           main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}           main restricted universe multiverse' }}
          - { name: updates       ,description: 'Ubuntu Updates'   ,module: node      ,releases: [      20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-updates   main restricted universe multiverse' }}
          - { name: backports     ,description: 'Ubuntu Backports' ,module: node      ,releases: [      20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-security  main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' }}
          - { name: security      ,description: 'Ubuntu Security'  ,module: node      ,releases: [      20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-updates   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-security  main restricted universe multiverse' }}
          - { name: pgdg          ,description: 'PGDG'             ,module: pgsql     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg main' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/apt/ ${distro_codename}-pgdg main' }}
          - { name: citus         ,description: 'Citus'            ,module: pgsql     ,releases: [11,12,20,22   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/citusdata/community/${distro_name}/ ${distro_codename} main' } }
          - { name: timescaledb   ,description: 'Timescaledb'      ,module: pgsql     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/${distro_name}/ ${distro_codename} main' }}
          # these repos are not used by default, but can be enabled by setting repo_modules
          - { name: grafana      ,description: 'Grafana'           ,module: grafana   ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://apt.grafana.com stable main' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/grafana/apt/ stable main' }}
          - { name: pgml         ,description: 'PostgresML'        ,module: pgml      ,releases: [         22   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://apt.postgresml.org ${distro_codename} main'  }}
          - { name: wiltondb     ,description: 'WiltonDB'          ,module: mssql     ,releases: [      20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/wiltondb/wiltondb/ubuntu/ ${distro_codename} main', china: 'http://10.10.10.1/apt/mssql/${distro_codename} ${distro_codename} main'  }}
          - { name: mysql        ,description: 'MySQL'             ,module: mysql     ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools', china: 'https://mirrors.tuna.tsinghua.edu.cn/mysql/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools' }}
          - { name: docker-ce    ,description: 'Docker'            ,module: docker    ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux//${distro_name} ${distro_codename} stable' }}
          - { name: kubernetes   ,description: 'Kubernetes'        ,module: kube      ,releases: [11,12,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/deb/ /' }}



    #----------------------------------#
    # build el8 packages on Rocky 8.9
    #----------------------------------#
    el8:
      hosts: { 10.10.10.8: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-el8
        #repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
        repo_extra_packages: [pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl]
        pg_extensions:
          - timescaledb periods temporal_tables emaj table_version pg_cron pg_later pg_background #timescaledb_toolkit #pg_timeseries
          - postgis pgrouting pointcloud q3c geoip pg_polyline pg_geohash #pg_h3 #ogr_fdw #mobilitydb
          - pgvector pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pg_bigm zhparser hunspell
          - pg_analytics pg_duckdb duckdb_fdw pg_parquet pg_fkpart plproxy #citus #hydra #pg_mooncake #pg_partman #pg_strom
          - hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pgmq pgq pg_cardano #age #rdkit
          - pg_tle plv8 pllua pldebugger plpgsql_check plprofiler plsh #plprql #pljava #plr #pgtap #faker #dbt2
          - prefix semver pgunit md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_currency pgmp numeral pg_rational pguint pg_uint128 ip4r pg_uri pgemailaddr acl timestamp9 chkpass #debversion #pg_rrule
          - topn pg_gzip pg_zstd pg_http pg_net pg_smtp_client pg_html5_email_address pgsql_tweaks pg_extra_time pg_timeit count_distinct extra_window_functions first_last_agg tdigest aggs_for_vecs aggs_for_arrays pg_arraymath quantile lower_quantile
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids pg_math pg_random pg_base36 pg_base62 pg_base58 floatvec pg_financial pgjwt pg_hashlib shacrypt cryptint pg_ecdsa pgpcre icu_ext pgqr envvar pg_protobuf url_encode
          - pg_repack pg_squeeze pg_dirtyread pgfincore ddlx pg_prioritize pg_readonly safeupdate pg_permissions pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_savior table_log pg_fio #pgdd #pg_checksums #pgautofailover #pgpool #pgagent
          - pg_profile pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis #pg_store_plans #pg_top #powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt anonymizer pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils noset #pg_tde
          - wrappers multicorn mysql_fdw tds_fdw sqlite_fdw pgbouncer_fdw redis_fdw pg_redis_pubsub hdfs_fdw firebird_fdw aws_s3 log_fdw #odbc_fdw #jdbc_fdw #oracle_fdw #db2_fdw #mongo_fdw #kafka_fdw
          - orafce pgtt session_variable pg_statement_rollback pg_dbms_metadata pg_dbms_lock pgmemcache #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots wal2json decoder_raw mimeo pg_fact_loader #wal2mongo #decoderbufs #repmgr #pg_bulkload


    #----------------------------------#
    # build el9 packages on Rocky 9.3
    #----------------------------------#
    el9:
      hosts: { 10.10.10.9: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-el9
        #repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
        repo_extra_packages: [pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl]
        pg_extensions:
          - timescaledb periods temporal_tables emaj table_version pg_cron pg_later pg_background #timescaledb_toolkit #pg_timeseries
          - postgis pgrouting pointcloud q3c geoip pg_polyline pg_geohash #pg_h3 #ogr_fdw #mobilitydb
          - pgvector pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pg_bigm zhparser hunspell
          - pg_analytics pg_duckdb duckdb_fdw pg_parquet pg_fkpart plproxy #citus #hydra #pg_mooncake #pg_partman #pg_strom
          - hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pgmq pgq pg_cardano #age #rdkit
          - pg_tle plv8 pllua pldebugger plpgsql_check plprofiler plsh pljava #plprql #plr #pgtap #faker #dbt2
          - prefix semver pgunit md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_currency pgmp numeral pg_rational pguint pg_uint128 ip4r pg_uri pgemailaddr acl timestamp9 chkpass #debversion #pg_rrule
          - topn pg_gzip pg_zstd pg_http pg_net pg_smtp_client pg_html5_email_address pgsql_tweaks pg_extra_time pg_timeit count_distinct extra_window_functions first_last_agg tdigest aggs_for_vecs aggs_for_arrays pg_arraymath quantile lower_quantile
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids pg_math pg_random pg_base36 pg_base62 pg_base58 floatvec pg_financial pgjwt pg_hashlib shacrypt cryptint pg_ecdsa pgpcre icu_ext pgqr envvar pg_protobuf url_encode
          - pg_repack pg_squeeze pg_dirtyread pgfincore ddlx pg_prioritize pg_readonly safeupdate pg_permissions pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_savior table_log pg_fio #pgdd #pg_checksums #pgautofailover #pgpool #pgagent
          - pg_profile pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis #pg_store_plans #pg_top #powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt anonymizer pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils noset #pg_tde
          - wrappers multicorn mysql_fdw tds_fdw sqlite_fdw pgbouncer_fdw redis_fdw pg_redis_pubsub hdfs_fdw firebird_fdw aws_s3 log_fdw #odbc_fdw #jdbc_fdw #oracle_fdw #db2_fdw #mongo_fdw #kafka_fdw
          - orafce pgtt session_variable pg_statement_rollback pg_dbms_metadata pg_dbms_lock pgmemcache #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots wal2json decoder_raw mimeo pg_fact_loader #wal2mongo #decoderbufs #repmgr #pg_bulkload


    #----------------------------------#
    # build d12 packages on Debian 12
    #----------------------------------#
    d12:
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-d12
        #repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
        repo_extra_packages: [pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl]
        pg_extensions:
          - timescaledb periods temporal_tables emaj table_version pg_cron pg_later pg_background #timescaledb_toolkit #pg_timeseries
          - postgis pgrouting pointcloud pg_h3 q3c ogr_fdw geoip pg_polyline pg_geohash mobilitydb
          - pgvector pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pg_bigm zhparser hunspell
          - pg_analytics pg_duckdb duckdb_fdw pg_parquet pg_fkpart pg_partman plproxy #citus #hydra #pg_mooncake #pg_strom
          - hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pgmq pgq pg_cardano #age #rdkit
          - pg_tle plv8 pllua pldebugger plpgsql_check plprofiler plsh pljava #plprql #plr #pgtap #faker #dbt2
          - prefix semver pgunit md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_currency pgmp numeral pg_rational pguint pg_uint128 ip4r pg_uri pgemailaddr acl debversion pg_rrule timestamp9 chkpass
          - pg_gzip pg_zstd pg_http pg_net pg_smtp_client pg_html5_email_address pgsql_tweaks pg_extra_time pg_timeit count_distinct extra_window_functions first_last_agg tdigest aggs_for_vecs aggs_for_arrays pg_arraymath quantile lower_quantile
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids pg_math pg_random pg_base36 pg_base62 pg_base58 floatvec pg_financial pgjwt pg_hashlib shacrypt cryptint pg_ecdsa pgpcre icu_ext pgqr envvar pg_protobuf url_encode #topn
          - pg_repack pg_squeeze pg_dirtyread pgfincore ddlx pg_prioritize pg_checksums pg_readonly safeupdate pg_permissions pgautofailover pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_savior table_log pg_fio #pgdd #pgpool #pgagent
          - pg_profile pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis #pg_store_plans #pg_top #powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt anonymizer pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils noset #pg_tde
          - wrappers mysql_fdw tds_fdw redis_fdw pg_redis_pubsub firebird_fdw aws_s3 log_fdw #multicorn #odbc_fdw #jdbc_fdw #oracle_fdw #db2_fdw #sqlite_fdw #pgbouncer_fdw #mongo_fdw #kafka_fdw #hdfs_fdw
          - orafce pgtt session_variable pg_statement_rollback pgmemcache #pg_dbms_metadata #pg_dbms_lock #pg_dbms_job
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots wal2json decoder_raw mimeo pg_fact_loader #wal2mongo #decoderbufs #repmgr #pg_bulkload


    #----------------------------------#
    # build u22 packages on Ubuntu 22.04
    #----------------------------------#
    u22:
      hosts: { 10.10.10.22: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-u22
        #repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
        repo_extra_packages: [pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl]
        pg_extensions:
          - timescaledb periods temporal_tables emaj table_version pg_cron pg_later pg_background #timescaledb_toolkit #pg_timeseries
          - postgis pgrouting pointcloud pg_h3 q3c ogr_fdw geoip pg_polyline pg_geohash mobilitydb
          - pgvector pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pg_bigm zhparser hunspell
          - pg_analytics pg_duckdb duckdb_fdw pg_parquet pg_fkpart pg_partman plproxy #citus #hydra #pg_mooncake #pg_strom
          - hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pgmq pgq pg_cardano #age #rdkit
          - pg_tle plv8 pllua pldebugger plpgsql_check plprofiler plsh pljava #plprql #plr #pgtap #faker #dbt2
          - prefix semver pgunit md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_currency pgmp numeral pg_rational pguint pg_uint128 ip4r pg_uri pgemailaddr acl debversion pg_rrule timestamp9 chkpass
          - pg_gzip pg_zstd pg_http pg_net pg_smtp_client pg_html5_email_address pgsql_tweaks pg_extra_time pg_timeit count_distinct extra_window_functions first_last_agg tdigest aggs_for_vecs aggs_for_arrays pg_arraymath quantile lower_quantile
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids pg_math pg_random pg_base36 pg_base62 pg_base58 floatvec pg_financial pgjwt pg_hashlib shacrypt cryptint pg_ecdsa pgpcre icu_ext pgqr envvar pg_protobuf url_encode #topn
          - pg_repack pg_squeeze pg_dirtyread pgfincore ddlx pg_prioritize pg_checksums pg_readonly safeupdate pg_permissions pgautofailover pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_savior table_log pg_fio #pgdd #pgpool #pgagent
          - pg_profile pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis #pg_store_plans #pg_top #powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt anonymizer pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils noset #pg_tde
          - wrappers mysql_fdw tds_fdw redis_fdw pg_redis_pubsub firebird_fdw aws_s3 log_fdw #multicorn #odbc_fdw #jdbc_fdw #oracle_fdw #db2_fdw #sqlite_fdw #pgbouncer_fdw #mongo_fdw #kafka_fdw #hdfs_fdw
          - orafce pgtt session_variable pg_statement_rollback pgmemcache #pg_dbms_metadata #pg_dbms_lock #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots wal2json decoder_raw mimeo pg_fact_loader #wal2mongo #decoderbufs #repmgr #pg_bulkload


    #----------------------------------#
    # build u24 packages on Ubuntu 24.04
    #----------------------------------#
    u24:
      hosts: { 10.10.10.24: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-u24
        #repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility ]
        repo_extra_packages: [pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl]
        pg_extensions:
          - timescaledb periods temporal_tables emaj table_version pg_cron pg_later pg_background #timescaledb_toolkit #pg_timeseries
          - postgis pgrouting pointcloud pg_h3 q3c ogr_fdw geoip pg_polyline pg_geohash mobilitydb
          - pgvector pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pg_bigm zhparser hunspell
          - pg_analytics pg_duckdb duckdb_fdw pg_parquet pg_fkpart pg_partman plproxy #citus #hydra #pg_mooncake #pg_strom
          - hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pgmq pgq pg_cardano #age #rdkit
          - pg_tle plv8 pllua pldebugger plpgsql_check plprofiler plsh pljava #plprql #plr #pgtap #faker #dbt2
          - prefix semver pgunit md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_currency pgmp numeral pg_rational pguint pg_uint128 ip4r pg_uri pgemailaddr acl debversion pg_rrule timestamp9 chkpass
          - pg_gzip pg_zstd pg_http pg_net pg_smtp_client pg_html5_email_address pgsql_tweaks pg_extra_time pg_timeit count_distinct extra_window_functions first_last_agg tdigest aggs_for_vecs aggs_for_arrays pg_arraymath quantile lower_quantile
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids pg_math pg_random pg_base36 pg_base62 pg_base58 floatvec pg_financial pgjwt pg_hashlib shacrypt cryptint pg_ecdsa pgpcre icu_ext pgqr envvar pg_protobuf url_encode #topn
          - pg_repack pg_squeeze pg_dirtyread pgfincore ddlx pg_prioritize pg_checksums pg_readonly safeupdate pg_permissions pgautofailover pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_savior table_log pg_fio #pgdd #pgpool #pgagent
          - pg_profile pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis #pg_store_plans #pg_top #powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt anonymizer pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils noset #pg_tde
          - wrappers mysql_fdw tds_fdw redis_fdw pg_redis_pubsub firebird_fdw aws_s3 log_fdw #multicorn #odbc_fdw #jdbc_fdw #oracle_fdw #db2_fdw #sqlite_fdw #pgbouncer_fdw #mongo_fdw #kafka_fdw #hdfs_fdw
          - orafce pgtt session_variable pg_statement_rollback pgmemcache #pg_dbms_metadata #pg_dbms_lock #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots wal2json decoder_raw mimeo pg_fact_loader #wal2mongo #decoderbufs #repmgr #pg_bulkload

Caveat

This build config is intended for building offline packages on a local network; it is not suitable for building over the public internet.

To build over the public internet, the http://10.10.10.1 repo URLs can be replaced with https://repo.pigsty.io.
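
For instance, a one-liner to rewrite all local repo URLs in place (illustrative; review the resulting pigsty.yml before building):

sed -i 's|http://10.10.10.1|https://repo.pigsty.io|g' pigsty.yml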

18 - 5-node: ext

5-node extension building template for 5 major OS distros

The ext template is a 5-node local build template used by Pigsty to build PostgreSQL extensions.


Overview

Usage: replace the pigsty.yml config with this template:

cp conf/build/ext.yml pigsty.yml

The IP addresses in this template are hard-coded; adjust them to match your environment before running the build.
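
For instance, a hypothetical remap of one build node (192.0.2.24 is a placeholder for your own address):

sed -i 's/10.10.10.24/192.0.2.24/g' pigsty.yml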


Content

Source: pigsty/conf/ext.yml

all:

  children:
    infra:
      hosts:
        #10.10.10.7:  { infra_seq: 0, admin_ip: 10.10.10.7  }
        10.10.10.8:  { infra_seq: 1, admin_ip: 10.10.10.8  }
        10.10.10.9:  { infra_seq: 2, admin_ip: 10.10.10.9  }
        10.10.10.12: { infra_seq: 3, admin_ip: 10.10.10.12 }
        10.10.10.22: { infra_seq: 4, admin_ip: 10.10.10.22 }
        10.10.10.24: { infra_seq: 5, admin_ip: 10.10.10.24 }
      vars: { node_conf: oltp }

    etcd: { hosts: { 10.10.10.8:  { etcd_seq: 1 }}, vars: {  etcd_cluster: etcd  } }

    rpm:
      hosts:
        #10.10.10.7:  { infra_seq: 7, admin_ip: 10.10.10.7  }
        10.10.10.8:  { infra_seq: 1, admin_ip: 10.10.10.8  }
        10.10.10.9:  { infra_seq: 2, admin_ip: 10.10.10.9  }
      vars:
        repo_upstream:
          #- { name: pigsty-local      ,description: 'Pigsty Local'      ,module: local ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty'  }} # used by intranet nodes
          - { name: pigsty-infra      ,description: 'Pigsty INFRA'      ,module: infra ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://10.10.10.1/yum/infra/$basearch' }}
          - { name: pigsty-pgsql      ,description: 'Pigsty PGSQL'      ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://10.10.10.1/yum/pgsql/el$releasever.$basearch' }}
          - { name: base              ,description: 'EL 7 Base'         ,module: node  ,releases: [7    ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://vault.centos.org/7.9.2009/os/$basearch/'                                ,china: 'https://mirrors.aliyun.com/centos/$releasever/os/$basearch/' } }
          - { name: updates           ,description: 'EL 7 Updates'      ,module: node  ,releases: [7    ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://vault.centos.org/7.9.2009/updates/$basearch/'                           ,china: 'https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/' } }
          - { name: extras            ,description: 'EL 7 Extras'       ,module: node  ,releases: [7    ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://vault.centos.org/7.9.2009/extras/$basearch/'                            ,china: 'https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/' } }
          - { name: epel              ,description: 'EL 7 EPEL'         ,module: node  ,releases: [7    ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/$basearch/'              ,china: 'https://mirrors.aliyun.com/epel/$releasever/$basearch/' ,europe: 'https://mirrors.xtom.de/epel/$releasever/$basearch/' } }
          - { name: centos-sclo       ,description: 'EL 7 SCLo'         ,module: node  ,releases: [7    ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://vault.centos.org/7.9.2009/sclo/$basearch/sclo/'                         ,china: 'https://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/sclo/' } }
          - { name: centos-sclo-rh    ,description: 'EL 7 SCLo rh'      ,module: node  ,releases: [7    ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://vault.centos.org/7.9.2009/sclo/$basearch/rh/'                           ,china: 'https://mirrors.aliyun.com/centos/$releasever/sclo/$basearch/rh/' } }
          - { name: baseos            ,description: 'EL 8+ BaseOS'      ,module: node  ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/'           ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/'           ,europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/' } }
          - { name: appstream         ,description: 'EL 8+ AppStream'   ,module: node  ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/'        ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/'        ,europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/' } }
          - { name: extras            ,description: 'EL 8+ Extras'      ,module: node  ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/'           ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/'           ,europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/' } }
          - { name: powertools        ,description: 'EL 8 PowerTools'   ,module: node  ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/'       ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/'       ,europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' } }
          - { name: HighAvailability  ,description: 'EL 8 HA'           ,module: node  ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/HighAvailability/$basearch/os/' ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/HighAvailability/$basearch/os/' ,europe: 'https://mirrors.xtom.de/rocky/$releasever/HighAvailability/$basearch/os/' } }
          - { name: NFV               ,description: 'EL 8 NFV'          ,module: node  ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/NFV/$basearch/os/'              ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/NFV/$basearch/os/'              ,europe: 'https://mirrors.xtom.de/rocky/$releasever/NFV/$basearch/os/' } }
          - { name: RT                ,description: 'EL 8 RT'           ,module: node  ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/RT/$basearch/os/'               ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/RT/$basearch/os/'               ,europe: 'https://mirrors.xtom.de/rocky/$releasever/RT/$basearch/os/' } }
          - { name: plus              ,description: 'EL 8 plus'         ,module: node  ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/plus/$basearch/os/'             ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/plus/$basearch/os/'             ,europe: 'https://mirrors.xtom.de/rocky/$releasever/plus/$basearch/os/' } }
          - { name: devel             ,description: 'EL 8 devel'        ,module: node  ,releases: [  8  ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/devel/$basearch/os/'            ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/devel/$basearch/os/'            ,europe: 'https://mirrors.xtom.de/rocky/$releasever/devel/$basearch/os/' } }
          - { name: crb               ,description: 'EL 9 CRB'          ,module: node  ,releases: [    9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/'              ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/'              ,europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/' } }
          - { name: epel              ,description: 'EL 8+ EPEL'        ,module: node  ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://download.fedoraproject.org/pub/epel/$releasever/Everything/$basearch/'   ,china: 'https://mirrors.tuna.tsinghua.edu.cn/epel/$releasever/Everything/$basearch/'      ,europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/' } }
          - { name: pgdg-common       ,description: 'PostgreSQL Common' ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg-srpm         ,description: 'PostgreSQL SRPM'   ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/srpms/common/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/srpms/common/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/srpms/common/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg-el8fix       ,description: 'PostgreSQL EL8FIX' ,module: pgsql ,releases: [    8] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-x86_64/' } }
          - { name: pgdg-el9fix       ,description: 'PostgreSQL EL9FIX' ,module: pgsql ,releases: [    9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-x86_64/'  ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-x86_64/' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-x86_64/' } }
          - { name: pgdg12            ,description: 'PostgreSQL 12'     ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/12/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/12/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/12/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg13            ,description: 'PostgreSQL 13'     ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg14            ,description: 'PostgreSQL 14'     ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg15            ,description: 'PostgreSQL 15'     ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg16            ,description: 'PostgreSQL 16'     ,module: pgsql ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg17            ,description: 'PostgreSQL 17'     ,module: pgsql ,releases: [  8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' } }
          - { name: pgdg-extras       ,description: 'PostgreSQL Extra'  ,module: extra ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' , europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rhel$releasever-extras/redhat/rhel-$releasever-$basearch' } }
          - { name: timescaledb       ,description: 'TimescaleDB'       ,module: pgsql ,releases: [7,8,9] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch' } }

    #----------------------------------#
    # build el8 packages on Rocky 8.9
    #----------------------------------#
    el8:
      hosts: { 10.10.10.8: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-el8
        node_default_packages:
          - python3,python3-pip,python3-virtualenv
          - pgdg-srpm-macros,postgresql1*-devel,postgresql1*-server
          - pkg-config,dnf-utils,dnf-plugins-core,modulemd-tools
          - rpm-build,rpmdevtools,createrepo_c,createrepo,jq,firebird-devel,libfq
          - llvm,llvm-devel,clang,ccache,flex,bison,make,cmake,CUnit,ninja-build
          - git,ncdu,wget,openssl,openblas*
          - readline-devel,zlib-devel,lz4-devel,libzstd-devel,openssl-devel,krb5-devel,hiredis-devel,gsl-devel,python3-docutils
          - libcurl-devel,libxml2-devel,gd-devel,pcre-devel,libical-devel,clamav-devel,uriparser-devel,librdkafka-devel,libsodium-devel
          - python3.11,python3.11-devel
          - libomp,libomp-devel,lld

    #----------------------------------#
    # build el9 packages on Rocky 9.3
    #----------------------------------#
    el9:
      hosts: { 10.10.10.9: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-el9
        node_default_packages:
          - python3,python3-pip,python3-virtualenv
          - pgdg-srpm-macros,postgresql1*-devel,postgresql1*-server
          - pkg-config,dnf-utils,dnf-plugins-core,modulemd-tools
          - rpm-build,rpmdevtools,createrepo_c,createrepo,jq,firebird-devel,libfq
          - llvm,llvm-devel,clang,ccache,flex,bison,make,cmake,CUnit,ninja-build
          - git,ncdu,wget,openssl,openblas*
          - readline-devel,zlib-devel,lz4-devel,libzstd-devel,openssl-devel,krb5-devel,hiredis-devel,gsl-devel,python3-docutils
          - libcurl-devel,libxml2-devel,gd-devel,pcre-devel,libical-devel,clamav-devel,uriparser-devel,librdkafka-devel,libsodium-devel
          - python3.11,python3.11-devel
          - libomp,libomp-devel,lld
          - cpanminus

    #----------------------------------#
    # build d12 packages on Debian 12
    #----------------------------------#
    d12:
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-d12
        node_default_packages:
          - postgresql-all
          - postgresql-server-dev-all
          - build-essential,debhelper,devscripts,fakeroot,pkg-config
          - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
          - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,chrony,zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,openssh-server,openssh-client
          - ninja-build,flex,bison,make,cmake,git,ncdu,wget,libtinfo5,libstdc++-12-dev,firebird-dev
          - libreadline-dev,zlib1g-dev,libicu-dev,libssl-dev,libpq-dev,libxml2-dev,libxslt1-dev,libldap2-dev,libperl-dev,libsodium23,libsodium-dev,libgd-dev,libgc-dev,libpam0g-dev,libsqlite3-dev,libhiredis-dev,libgsl-dev
          - python3-dev,tcl-dev,libkrb5-dev,libsasl2-dev,uuid-dev,libossp-uuid-dev,gettext,libcurl4-openssl-dev,libopenblas-dev,liblz4-dev,libzstd-dev,libpulse-dev,liburiparser-dev,libcrack2-dev,librdkafka-dev,python3-docutils



    #----------------------------------#
    # build u22 packages on Ubuntu 22.04
    #----------------------------------#
    u22:
      hosts: { 10.10.10.22: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-u22
        node_default_packages:
          - postgresql-all
          - postgresql-server-dev-all
          - build-essential,debhelper,devscripts,fakeroot,pkg-config
          - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
          - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,chrony,zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,openssh-server,openssh-client
          - ninja-build,flex,bison,make,cmake,git,ncdu,wget,lld,libtinfo5,libstdc++-12-dev,firebird-dev
          - libreadline-dev,zlib1g-dev,libicu-dev,libssl-dev,libpq-dev,libxml2-dev,libxslt1-dev,libldap2-dev,libperl-dev,libsodium23,libsodium-dev,libgd-dev,libgc-dev,libpam0g-dev,libsqlite3-dev,libhiredis-dev,libgsl-dev
          - python3-dev,tcl-dev,libkrb5-dev,libsasl2-dev,uuid-dev,libossp-uuid-dev,gettext,libcurl4-openssl-dev,libopenblas-dev,liblz4-dev,libzstd-dev,libpulse-dev,liburiparser-dev,libcrack2-dev,librdkafka-dev,python3-docutils

    #----------------------------------#
    # build u24 packages on Ubuntu 24.04
    #----------------------------------#
    u24:
      hosts: { 10.10.10.24: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-u24
        node_default_packages:
          - postgresql-all
          - postgresql-server-dev-all
          - build-essential,debhelper,devscripts,fakeroot,pkg-config
          - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
          - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,chrony,zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,openssh-server,openssh-client
          - ninja-build,flex,bison,make,cmake,git,ncdu,wget,lld,libstdc++-12-dev,firebird-dev #libtinfo5
          - libreadline-dev,zlib1g-dev,libicu-dev,libssl-dev,libpq-dev,libxml2-dev,libxslt1-dev,libldap2-dev,libperl-dev,libsodium23,libsodium-dev,libgd-dev,libgc-dev,libpam0g-dev,libsqlite3-dev,libhiredis-dev,libgsl-dev
          - python3-dev,tcl-dev,libkrb5-dev,libsasl2-dev,uuid-dev,libossp-uuid-dev,gettext,libcurl4-openssl-dev,libopenblas-dev,liblz4-dev,libzstd-dev,libpulse-dev,liburiparser-dev,libcrack2-dev,librdkafka-dev,python3-docutils


    #----------------------------------#
    # build el7 packages on CentOS 7.9
    #----------------------------------#
    #el7:
    #  hosts: { 10.10.10.7: { pg_seq: 1, pg_role: primary } }
    #  vars:
    #    pg_cluster: pg-el7
    #    node_default_packages:
    #      - python3,python3-pip,python3-virtualenv
    #      - rpm-build,rpmdevtools,createrepo_c,createrepo
    #      - llvm,llvm-devel,clang,ccache,flex,bison,make,cmake,CUnit
    #      - git,ncdu,wget,openssl,openblas*
    #      - readline-devel,zlib-devel,lz4-devel,libzstd-devel,openssl-devel,krb5-devel
    #      - hiredis-devel,gsl-devel,python3-docutils,uriparser-devel,librdkafka-devel
    #      - libcurl-devel,libxml2-devel,gd-devel,pcre-devel,libical-devel,clamav-devel
    #      - pgdg-srpm-macros,postgresql1*-devel,postgresql1*-server
    #      - pkgconfig,yum-utils

  vars:
    version: v3.1.0
    admin_ip: 10.10.10.8
    region: china
    etcd_clean: true
    proxy_env: # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
    node_conf: oltp
    node_repo_modules: infra,pgsql,node
    node_repo_remove: true            # remove existing repo on node?
    repo_url_packages: []

Caveat

Like the oss building template above, this config pulls infra and pgsql packages from the local http://10.10.10.1 repo; replace it with https://repo.pigsty.io when building over the public internet.

19 - 36-node: prod

Production environment simulation with 36 nodes

Overview

Usage: replace the pigsty.yml config with this template:

cp -f conf/prod.yml pigsty.yml

Content

Source: pigsty/conf/prod.yml

all:

  children:

    #==========================================================#
    # infra: 2 nodes
    #==========================================================#
    # ./infra.yml -l infra
    # ./docker.yml -l infra (optional)
    infra:
      hosts:
        10.10.10.10: {}
        10.10.10.11: {}
      vars:
        docker_enabled: true
        node_conf: oltp         # use oltp template for infra nodes
        pg_conf: oltp.yml       # use oltp template for infra pgsql
        pg_exporters:           # bin/pgmon-add pg-meta2/pg-test2/pg-src2/pg-dst2
          20001: {pg_cluster: pg-meta2   ,pg_seq: 1 ,pg_host: 10.10.10.10, pg_databases: [{ name: meta }]}
          20002: {pg_cluster: pg-meta2   ,pg_seq: 2 ,pg_host: 10.10.10.11, pg_databases: [{ name: meta }]}

          20003: {pg_cluster: pg-test2   ,pg_seq: 1 ,pg_host: 10.10.10.41, pg_databases: [{ name: test }]}
          20004: {pg_cluster: pg-test2   ,pg_seq: 2 ,pg_host: 10.10.10.42, pg_databases: [{ name: test }]}
          20005: {pg_cluster: pg-test2   ,pg_seq: 3 ,pg_host: 10.10.10.43, pg_databases: [{ name: test }]}
          20006: {pg_cluster: pg-test2   ,pg_seq: 4 ,pg_host: 10.10.10.44, pg_databases: [{ name: test }]}

          20007: {pg_cluster: pg-src2    ,pg_seq: 1 ,pg_host: 10.10.10.45, pg_databases: [{ name: src }]}
          20008: {pg_cluster: pg-src2    ,pg_seq: 2 ,pg_host: 10.10.10.46, pg_databases: [{ name: src }]}
          20009: {pg_cluster: pg-src2    ,pg_seq: 3 ,pg_host: 10.10.10.47, pg_databases: [{ name: src }]}

          20010: {pg_cluster: pg-dst2    ,pg_seq: 3 ,pg_host: 10.10.10.48, pg_databases: [{ name: dst }]}
          20011: {pg_cluster: pg-dst2    ,pg_seq: 4 ,pg_host: 10.10.10.49, pg_databases: [{ name: dst }]}
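        # usage sketch: these remote exporter targets can be registered one cluster
        # at a time with the helper referenced above, e.g.:
        #   bin/pgmon-add pg-meta2        # likewise for pg-test2 / pg-src2 / pg-dst2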


    #==========================================================#
    # nodes: 36 nodes
    #==========================================================#
    # ./node.yml
    nodes:
      hosts:
        10.10.10.10 : { nodename: meta1  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 1 ,pg_role: primary, infra_seq: 1 }
        10.10.10.11 : { nodename: meta2  ,node_cluster: meta   ,pg_cluster: pg-meta  ,pg_seq: 2 ,pg_role: replica, infra_seq: 2 }
        10.10.10.12 : { nodename: pg12   ,node_cluster: pg12   ,pg_cluster: pg-v12   ,pg_seq: 1 ,pg_role: primary }
        10.10.10.13 : { nodename: pg13   ,node_cluster: pg13   ,pg_cluster: pg-v13   ,pg_seq: 1 ,pg_role: primary }
        10.10.10.14 : { nodename: pg14   ,node_cluster: pg14   ,pg_cluster: pg-v14   ,pg_seq: 1 ,pg_role: primary }
        10.10.10.15 : { nodename: pg15   ,node_cluster: pg15   ,pg_cluster: pg-v15   ,pg_seq: 1 ,pg_role: primary }
        10.10.10.16 : { nodename: pg16   ,node_cluster: pg16   ,pg_cluster: pg-v16   ,pg_seq: 1 ,pg_role: primary }
        10.10.10.17 : { nodename: pg17   ,node_cluster: pg17   ,pg_cluster: pg-v17   ,pg_seq: 1 ,pg_role: primary }
        10.10.10.18 : { nodename: proxy1 ,node_cluster: proxy  ,vip_address: 10.10.10.20 ,vip_vrid: 20 ,vip_interface: eth1 ,vip_role: master }
        10.10.10.19 : { nodename: proxy2 ,node_cluster: proxy  ,vip_address: 10.10.10.20 ,vip_vrid: 20 ,vip_interface: eth1 ,vip_role: backup }
        10.10.10.21 : { nodename: minio1 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 1 ,etcd_cluster: etcd ,etcd_seq: 1}
        10.10.10.22 : { nodename: minio2 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 2 ,etcd_cluster: etcd ,etcd_seq: 2}
        10.10.10.23 : { nodename: minio3 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 3 ,etcd_cluster: etcd ,etcd_seq: 3}
        10.10.10.24 : { nodename: minio4 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 4 ,etcd_cluster: etcd ,etcd_seq: 4}
        10.10.10.25 : { nodename: minio5 ,node_cluster: minio  ,minio_cluster: minio ,minio_seq: 5 ,etcd_cluster: etcd ,etcd_seq: 5}
        10.10.10.40 : { nodename: node40 ,node_id_from_pg: true }
        10.10.10.41 : { nodename: node41 ,node_id_from_pg: true }
        10.10.10.42 : { nodename: node42 ,node_id_from_pg: true }
        10.10.10.43 : { nodename: node43 ,node_id_from_pg: true }
        10.10.10.44 : { nodename: node44 ,node_id_from_pg: true }
        10.10.10.45 : { nodename: node45 ,node_id_from_pg: true }
        10.10.10.46 : { nodename: node46 ,node_id_from_pg: true }
        10.10.10.47 : { nodename: node47 ,node_id_from_pg: true }
        10.10.10.48 : { nodename: node48 ,node_id_from_pg: true }
        10.10.10.49 : { nodename: node49 ,node_id_from_pg: true }
        10.10.10.50 : { nodename: node50 ,node_id_from_pg: true }
        10.10.10.51 : { nodename: node51 ,node_id_from_pg: true }
        10.10.10.52 : { nodename: node52 ,node_id_from_pg: true }
        10.10.10.53 : { nodename: node53 ,node_id_from_pg: true }
        10.10.10.54 : { nodename: node54 ,node_id_from_pg: true }
        10.10.10.55 : { nodename: node55 ,node_id_from_pg: true }
        10.10.10.56 : { nodename: node56 ,node_id_from_pg: true }
        10.10.10.57 : { nodename: node57 ,node_id_from_pg: true }
        10.10.10.58 : { nodename: node58 ,node_id_from_pg: true }
        10.10.10.59 : { nodename: node59 ,node_id_from_pg: true }
        10.10.10.88 : { nodename: test   }

    #==========================================================#
    # etcd: 5 nodes used as dedicated etcd cluster
    #==========================================================#
    # ./etcd.yml -l etcd;
    etcd:
      hosts:
        10.10.10.21: {}
        10.10.10.22: {}
        10.10.10.23: {}
        10.10.10.24: {}
        10.10.10.25: {}
      vars: {}

    #==========================================================#
    # minio: 5 nodes used as dedicated minio cluster
    #==========================================================#
    # ./minio.yml -l minio;
    minio:
      hosts:
        10.10.10.21: {}
        10.10.10.22: {}
        10.10.10.23: {}
        10.10.10.24: {}
        10.10.10.25: {}
      vars:
        minio_data: '/data{1...4}' # 5 node x 4 disk

    #==========================================================#
    # proxy: 2 nodes used as dedicated haproxy servers
    #==========================================================#
    # ./node.yml -l proxy
    proxy:
      hosts:
        10.10.10.18: {}
        10.10.10.19: {}
      vars:
        vip_enabled: true
        haproxy_services:      # expose minio service : sss.pigsty:9000
          - name: minio        # [REQUIRED] service name, unique
            port: 9000         # [REQUIRED] service port, unique
            balance: leastconn # Use leastconn algorithm and minio health check
            options: [ "option httpchk", "option http-keep-alive", "http-check send meth OPTIONS uri /minio/health/live", "http-check expect status 200" ]
            servers:           # reload service with ./node.yml -t haproxy_config,haproxy_reload
              - { name: minio-1 ,ip: 10.10.10.21 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.22 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.23 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.24 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-5 ,ip: 10.10.10.25 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
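        # verification sketch, assuming sss.pigsty resolves to the proxy L2 VIP 10.10.10.20:
        #   curl -sk https://sss.pigsty:9000/minio/health/live -o /dev/null -w '%{http_code}\n'   # expect 200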

    #==========================================================#
    # pg-meta: reuse infra node as meta cmdb
    #==========================================================#
    # ./pgsql.yml -l pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1 , pg_role: primary }
        10.10.10.11: { pg_seq: 2 , pg_role: replica }
      vars:
        pg_cluster: pg-meta
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector}]}
          - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
          - { name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database }
        pg_hba_rules:
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_libs: 'pg_stat_statements, auto_explain' # add timescaledb to shared_preload_libraries if needed
        node_crontab:  # full backup at 1am on Monday, incremental backups on the other days
          - '00 01 * * 1 postgres /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup'
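        # verification sketch, assuming the pgBackRest stanza is named after the cluster:
        #   sudo -iu postgres pgbackrest --stanza=pg-meta info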

    #==========================================================#
    # pg-v12 - v17: one cluster per major version (older majors no longer supported in OSS ver since 2.6)
    #==========================================================#
    # ./pgsql.yml -l pg-v*
    pg-v12:
      hosts: { 10.10.10.12: {}}
      vars:
        pg_version: 12
        pg_service_provider: proxy       # use load balancer on group `proxy` with port 10012
        pg_default_services:  [{ name: primary ,port: 10012 ,dest: postgres  ,check: /primary   ,selector: "[]" }]
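        # connection sketch via the proxy VIP (dbuser_dba and its password are assumptions):
        #   psql -h 10.10.10.20 -p 10012 -U dbuser_dba -d postgres -c 'SELECT version()'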

    pg-v13:
      hosts: { 10.10.10.13: {}}
      vars:
        pg_version: 13
        pg_service_provider: proxy       # use load balancer on group `proxy` with port 10013
        pg_default_services:  [{ name: primary ,port: 10013 ,dest: postgres  ,check: /primary   ,selector: "[]" }]

    pg-v14:
      hosts: { 10.10.10.14: {}}
      vars:
        pg_version: 14
        pg_service_provider: proxy       # use load balancer on group `proxy` with port 10014
        pg_default_services:  [{ name: primary ,port: 10014 ,dest: postgres  ,check: /primary   ,selector: "[]" }]

    pg-v15:
      hosts: { 10.10.10.15: {}}
      vars:
        pg_version: 15
        pg_service_provider: proxy       # use load balancer on group `proxy` with port 10015
        pg_default_services:  [{ name: primary ,port: 10015 ,dest: postgres  ,check: /primary   ,selector: "[]" }]

    pg-v16:
      hosts: { 10.10.10.16: {}}
      vars:
        pg_version: 16
        pg_service_provider: proxy       # use load balancer on group `proxy` with port 10016
        pg_default_services:  [{ name: primary ,port: 10016 ,dest: postgres  ,check: /primary   ,selector: "[]" }]

    pg-v17:
      hosts: { 10.10.10.17: {}}
      vars:
        pg_version: 17
        pg_service_provider: proxy       # use load balancer on group `proxy` with port 10017
        pg_default_services:  [{ name: primary ,port: 10017 ,dest: postgres  ,check: /primary   ,selector: "[]" }]

    #==========================================================#
    # pg-pitr: single node
    #==========================================================#
    # ./pgsql.yml -l pg-pitr
    pg-pitr:
      hosts:
        10.10.10.40: { pg_seq: 1 ,pg_role: primary }
      vars:
        pg_cluster: pg-pitr
        pg_databases: [{ name: test }]

    #==========================================================#
    # pg-test: dedicate 4 node testing cluster
    #==========================================================#
    # ./pgsql.yml -l pg-test
    pg-test:
      hosts:
        10.10.10.41: { pg_seq: 1 ,pg_role: primary }
        10.10.10.42: { pg_seq: 2 ,pg_role: replica }
        10.10.10.43: { pg_seq: 3 ,pg_role: replica }
        10.10.10.44: { pg_seq: 4 ,pg_role: replica }
      vars:
        pg_cluster: pg-test
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]


    #==========================================================#
    # pg-src: dedicate 3 node testing cluster
    #==========================================================#
    # ./pgsql.yml -l pg-src
    pg-src:
      hosts:
        10.10.10.45: { pg_seq: 1 ,pg_role: primary }
        10.10.10.46: { pg_seq: 2 ,pg_role: replica }
        10.10.10.47: { pg_seq: 3 ,pg_role: replica }
      vars:
        pg_cluster: pg-src
        #pg_version: 14
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.4/24
        pg_vip_interface: eth1
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: src }]


    #==========================================================#
    # pg-dst: dedicate 2 node testing cluster
    #==========================================================#
    # ./pgsql.yml -l pg-dst
    pg-dst:
      hosts:
        10.10.10.48: { pg_seq: 1 ,pg_role: primary } # 8C 8G
        10.10.10.49: { pg_seq: 2 ,pg_role: replica } # 1C 2G
      vars:
        pg_cluster: pg-dst
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.5/24
        pg_vip_interface: eth1
        node_hugepage_ratio: 0.3
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: dst } ]


    #==========================================================#
    # pg-citus: 10 node citus cluster (5 x primary-replica pair)
    #==========================================================#
    pg-citus: # citus group
      hosts:
        10.10.10.50: { pg_group: 0, pg_cluster: pg-citus0 ,pg_vip_address: 10.10.10.60/24 ,pg_seq: 0, pg_role: primary }
        10.10.10.51: { pg_group: 0, pg_cluster: pg-citus0 ,pg_vip_address: 10.10.10.60/24 ,pg_seq: 1, pg_role: replica }
        10.10.10.52: { pg_group: 1, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.61/24 ,pg_seq: 0, pg_role: primary }
        10.10.10.53: { pg_group: 1, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.61/24 ,pg_seq: 1, pg_role: replica }
        10.10.10.54: { pg_group: 2, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.62/24 ,pg_seq: 0, pg_role: primary }
        10.10.10.55: { pg_group: 2, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.62/24 ,pg_seq: 1, pg_role: replica }
        10.10.10.56: { pg_group: 3, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.63/24 ,pg_seq: 0, pg_role: primary }
        10.10.10.57: { pg_group: 3, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.63/24 ,pg_seq: 1, pg_role: replica }
        10.10.10.58: { pg_group: 4, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.64/24 ,pg_seq: 0, pg_role: primary }
        10.10.10.59: { pg_group: 4, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.64/24 ,pg_seq: 1, pg_role: replica }
      vars:
        pg_mode: citus                    # pgsql cluster mode: citus
        pg_version: 16                    # citus does not have pg17 available yet
        pg_shard: pg-citus                # citus shard name: pg-citus
        pg_primary_db: test               # primary database used by citus
        pg_dbsu_password: DBUser.Postgres # all dbsu password access for citus cluster
        pg_vip_enabled: true
        pg_vip_interface: eth1
        pg_extensions: [ 'citus postgis pgvector' ]
        pg_libs: 'citus, pg_stat_statements, auto_explain' # citus will be added by patroni automatically
        pg_users: [ { name: test ,password: test ,pgbouncer: true ,roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: test ,owner: test ,extensions: [ { name: citus }, { name: vector } ] } ]
        pg_hba_rules:
          - { user: 'all' ,db: all  ,addr: 10.10.10.0/24 ,auth: trust ,title: 'trust citus cluster members'        }
          - { user: 'all' ,db: all  ,addr: 127.0.0.1/32  ,auth: ssl   ,title: 'all user ssl access from localhost' }
          - { user: 'all' ,db: all  ,addr: intra         ,auth: ssl   ,title: 'all user ssl access from intranet'  }
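        # usage sketch: create and distribute a table from the coordinator (pg-citus0 primary):
        #   psql postgres://test:[email protected]:5432/test \
        #     -c "CREATE TABLE items (id bigint PRIMARY KEY, payload text);" \
        #     -c "SELECT create_distributed_table('items', 'id');"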

    #==========================================================#
    # redis-meta: reuse the 5 etcd nodes as redis sentinel
    #==========================================================#
    # ./redis.yml -l redis-meta
    redis-meta:
      hosts:
        10.10.10.21: { redis_node: 1 , redis_instances: { 26379: {} } }
        10.10.10.22: { redis_node: 2 , redis_instances: { 26379: {} } }
        10.10.10.23: { redis_node: 3 , redis_instances: { 26379: {} } }
        10.10.10.24: { redis_node: 4 , redis_instances: { 26379: {} } }
        10.10.10.25: { redis_node: 5 , redis_instances: { 26379: {} } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 256MB
        redis_sentinel_monitor:  # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-src, host: 10.10.10.45, port: 6379 ,password: redis.src, quorum: 1 }
          - { name: redis-dst, host: 10.10.10.48, port: 6379 ,password: redis.dst, quorum: 1 }
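        # query sketch: ask any sentinel for the current redis-src primary address:
        #   redis-cli -h 10.10.10.21 -p 26379 -a 'redis.meta' sentinel get-master-addr-by-name redis-src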

    #==========================================================#
    # redis-test: redis native cluster in 4 nodes, 12 instances
    #==========================================================#
    # ./node.yml -l redis-test; ./redis.yml -l redis-test
    redis-test:
      hosts:
        10.10.10.41: { redis_node: 1 ,redis_instances: { 6379: {} ,6380: {} ,6381: {} } }
        10.10.10.42: { redis_node: 2 ,redis_instances: { 6379: {} ,6380: {} ,6381: {} } }
        10.10.10.43: { redis_node: 3 ,redis_instances: { 6379: {} ,6380: {} ,6381: {} } }
        10.10.10.44: { redis_node: 4 ,redis_instances: { 6379: {} ,6380: {} ,6381: {} } }
      vars:
        redis_cluster: redis-test
        redis_password: 'redis.test'
        redis_mode: cluster
        redis_max_memory: 64MB

    #==========================================================#
    # redis-src: reuse pg-src 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-src
    redis-src:
      hosts:
        10.10.10.45: { redis_node: 1 , redis_instances: {6379: {  } }}
        10.10.10.46: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.45 6379' }, 6380: { replica_of: '10.10.10.46 6379' } }}
        10.10.10.47: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.45 6379' }, 6380: { replica_of: '10.10.10.47 6379' } }}
      vars:
        redis_cluster: redis-src
        redis_password: 'redis.src'
        redis_max_memory: 64MB

    #==========================================================#
    # redis-dst: reuse pg-dst 2 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-dst
    redis-dst:
      hosts:
        10.10.10.48: { redis_node: 1 , redis_instances: {6379: {  }                               }}
        10.10.10.49: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.48 6379' } }}
      vars:
        redis_cluster: redis-dst
        redis_password: 'redis.dst'
        redis_max_memory: 64MB

    #==========================================================#
    # ferret: reuse pg-src as mongo (ferretdb)
    #==========================================================#
    # ./mongo.yml -l ferret
    ferret:
      hosts:
        10.10.10.45: { mongo_seq: 1 }
        10.10.10.46: { mongo_seq: 2 }
        10.10.10.47: { mongo_seq: 3 }
      vars:
        mongo_cluster: ferret
        mongo_pgurl: 'postgres://test:[email protected]:5432/src'
        #mongo_pgurl: 'postgres://test:[email protected]:5436/test'
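        # connection sketch, assuming FerretDB listens on the default mongo port 27017:
        #   mongosh 'mongodb://test:[email protected]:27017/test'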


    #==========================================================#
    # test: running cli tools and test miscellaneous stuff
    #==========================================================#
    test:
      hosts: { 10.10.10.88: { nodename: test } }
      vars:
        node_cluster: test
        node_packages: [ 'etcd,logcli,mcli,redis' ]


  #============================================================#
  # Global Variables
  #============================================================#
  vars:

    #==========================================================#
    # INFRA
    #==========================================================#
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "10.10.10.10:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "10.10.10.10:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "10.10.10.10:9093" }
      blackbox     : { endpoint: "10.10.10.10:9115" }
      loki         : { endpoint: "10.10.10.10:3100" }
      minio        : { domain: m.pigsty    ,endpoint: "10.10.10.21:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty  ,endpoint: "127.0.0.1:8884" }
      pgadmin      : { domain: adm.pigsty  ,endpoint: "127.0.0.1:8885" }
      pgweb        : { domain: cli.pigsty  ,endpoint: "127.0.0.1:8886" }
      bytebase     : { domain: ddl.pigsty  ,endpoint: "127.0.0.1:8887" }
      jupyter      : { domain: lab.pigsty  ,endpoint: "127.0.0.1:8888"  , websocket: true }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }
    nginx_navbar: []
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - 10.10.10.10 h.pigsty a.pigsty p.pigsty g.pigsty

    #==========================================================#
    # NODE
    #==========================================================#
    node_id_from_pg: false            # use nodename rather than pg identity as hostname
    node_conf: tiny                   # use small node template
    node_timezone: Asia/Hong_Kong     # use Asia/Hong_Kong Timezone
    node_dns_servers:                 # DNS servers in /etc/resolv.conf
      - 10.10.10.10
      - 10.10.10.11
    node_etc_hosts:
      - 10.10.10.10 h.pigsty a.pigsty p.pigsty g.pigsty
      - 10.10.10.20 sss.pigsty        # point minio service domain to the L2 VIP of proxy cluster
    node_ntp_servers:                 # NTP servers in /etc/chrony.conf
      - pool cn.pool.ntp.org iburst
      - pool 10.10.10.10 iburst
    node_admin_ssh_exchange: false    # exchange admin ssh key among node cluster

    #==========================================================#
    # PGSQL
    #==========================================================#
    pg_conf: tiny.yml
    pgbackrest_method: minio          # USE THE HA MINIO THROUGH A LOAD BALANCER
    pg_dbsu_ssh_exchange: false       # do not exchange dbsu ssh key among pgsql cluster
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:
        type: s3
        s3_endpoint: sss.pigsty       # the minio service domain, resolving to the proxy L2 VIP 10.10.10.20 (see node_etc_hosts)
        s3_region: us-east-1          # placeholder region, MinIO accepts the default us-east-1
        s3_bucket: pgsql              # the dedicated pgsql backup bucket on the MinIO cluster
        s3_key: pgbackrest            # better to use a dedicated password for the MinIO pgbackrest user
        s3_key_secret: S3User.Backup
        s3_uri_style: path
        path: /pgbackrest
        storage_port: 9000            # Use the load balancer port 9000
        storage_ca_file: /etc/pki/ca.crt
        bundle: y
        cipher_type: aes-256-cbc      # better to use a new cipher password in production
        cipher_pass: pgBackRest.${pg_cluster}
        retention_full_type: time
        retention_full: 14
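        # access sketch: verify the MinIO repo through the load balancer with the
        # bundled mcli client (the alias name `sss` is arbitrary):
        #   mcli alias set sss https://sss.pigsty:9000 pgbackrest S3User.Backup
        #   mcli ls sss/pgsql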

Caveat