2-node: dual
This template illustrates a “half” high-availability PostgreSQL cluster with primary-replica replication, tolerating the failure of one specific node. It is a good choice if you only have two servers.
Overview
- Conf Name: dual
- Node Count: 2-node, pigsty/vagrant/spec/dual.rb
- Description: Dual-node template that sets up a basic high-availability PostgreSQL cluster with primary-replica replication, tolerating the failure of one node.
- OS Distro: el8, el9, d12, u22, u24
- OS Arch: x86_64, aarch64
- Related:
Description
To enable this template, pass the -c dual parameter to the configure script:

./configure -c dual [-i <primary_ip>]

You may also need to replace the placeholder IP of the other node, 10.10.10.11, with the real address of your second server, as shown in the sketch below.
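For example, here is a minimal sketch of the flow, assuming configure's -i flag substitutes the given address for the 10.10.10.10 placeholder as in standard Pigsty usage, and assuming your two servers are 10.10.10.20 and 10.10.10.21 (both illustrative; substitute your own):

./configure -c dual -i 10.10.10.20                  # generate pigsty.yml, filling in the admin node IP
sed -i 's/10.10.10.11/10.10.10.21/g' pigsty.yml     # point node B's placeholder at your real second server

If your servers sit on a different subnet, remember to adjust pg_vip_address in the generated pigsty.yml as well, since the L2 VIP must live in the same subnet as the nodes.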
Content
Source: pigsty/conf/dual.yml
all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, optional backup repo for pgbackrest
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres cluster 'pg-meta' with single primary instance
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: replica }
        10.10.10.11: { pg_seq: 2, pg_role: primary } # <----- use this as primary by default
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector }] } ]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ]  # make a full backup every 1am
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  vars:                               # global parameters
    version: v3.1.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    infra_portal:                     # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      #minio       : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    # consider using local fs or external s3 service for cold backup storage in dual node configuration
    #pgbackrest_method: minio

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    # if you wish to customize your own repo, change these settings:
    repo_modules: infra,node,pgsql    # install upstream repo during repo bootstrap
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_modules: local          # install the local module in repo_upstream for all nodes
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_packages: [                  # default packages to be downloaded (if `repo_packages` is not explicitly set)
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common #,docker
    ]
    repo_extra_packages: [            # default postgres packages to be downloaded
      pgsql-main
      #,pgsql-core,pgsql-time,pgsql-gis,pgsql-rag,pgsql-fts,pgsql-olap,pgsql-feat,pgsql-lang,pgsql-type,pgsql-func,pgsql-admin,pgsql-stat,pgsql-sec,pgsql-fdw,pgsql-sim,pgsql-etl,
      #,pg17-core,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      #,pg16-core,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
    ]
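Once pigsty.yml has been generated and adjusted, the cluster is deployed by running the installation playbook from the repo root on the admin node (10.10.10.10). The playbook names below are the standard Pigsty playbooks; verify they match your Pigsty version before running:

./install.yml                 # one-pass installation of all modules defined in pigsty.yml
./pgsql.yml -l pg-meta        # or (re)run just the PostgreSQL module against the pg-meta cluster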
Caveat
Typically, a complete high-availability deployment in production requires at least three nodes, so that the cluster can keep operating normally if any single server fails. This is because the DCS (etcd) used by Patroni for failure detection requires agreement from a majority (quorum) of nodes, which two nodes cannot provide.
However, sometimes only two servers are available. In such cases, the dual template is a feasible option, assuming you have two servers:

- Node A, 10.10.10.10: defaults as the management node, running the Infra infrastructure, a single-node etcd, and a PostgreSQL standby.
- Node B, 10.10.10.11: serves only as the PostgreSQL primary.
In this scenario, the dual-node template tolerates the failure of node B: the cluster automatically fails over to node A. If node A fails entirely (the whole node crashes), however, manual intervention is required. That said, if node A is not completely offline and only its etcd or PostgreSQL instance has issues, the system as a whole can still continue to operate normally.
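To see which node currently holds the primary role after a switchover, one option is to query pg_is_in_recovery() on both instances. The sketch below assumes password authentication works for the dbuser_meta user defined in this template; adjust it to match your pg_hba rules:

for ip in 10.10.10.10 10.10.10.11; do
  echo -n "$ip: "
  PGPASSWORD=DBUser.Meta psql -h "$ip" -U dbuser_meta -d meta -Atc \
    "SELECT CASE WHEN pg_is_in_recovery() THEN 'replica' ELSE 'primary' END;"
done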
This template uses an L2 VIP to achieve high-availability access. If your network conditions do not allow an L2 VIP (for example, in a restricted cloud environment, or when nodes span different L2 broadcast domains), consider using DNS resolution or another access method instead, as sketched below.
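As a rough sketch of the DNS-style alternative, you can maintain a stable name that resolves to the current primary yourself and repoint it after a failover. The record below uses /etc/hosts, and the name pg-meta.pigsty and the address are purely illustrative:

echo '10.10.10.11 pg-meta.pigsty' | sudo tee -a /etc/hosts    # map a stable name to the current primary
psql -h pg-meta.pigsty -U dbuser_meta -d meta -c 'SELECT 1;'  # clients connect by name instead of the VIP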