Planning Architecture and Nodes

How many nodes do you need? Which modules need HA? How should you plan based on available resources and requirements?

Pigsty uses a modular architecture. You can combine modules like building blocks and express your intent through declarative configuration.

Common Patterns

Here are common deployment patterns for reference. Customize based on your requirements:

Single-node patterns:

| Pattern | INFRA | ETCD | PGSQL | MINIO | Description |
|---|---|---|---|---|---|
| Single-node (meta) | 1 | 1 | 1 | | Single-node deployment default |
| Slim deploy (slim) | | 1 | 1 | | Database only, no monitoring infra |
| Infra-only (infra) | 1 | | | | Monitoring infrastructure only |
| Rich deploy (rich) | 1 | 1 | 1 | 1 | Single-node + object storage + local repo with all extensions |

Multi-node patterns:

| Pattern | INFRA | ETCD | PGSQL | MINIO | Description |
|---|---|---|---|---|---|
| Two-node (dual) | 1 | 1 | 2 | | Semi-HA, tolerates specific node failure |
| Three-node (trio) | 3 | 3 | 3 | | Standard HA, tolerates any one failure |
| Four-node (full) | 1 | 1 | 1+3 | | Demo setup, single INFRA/ETCD |
| Production (simu) | 2 | 3 | n | n | 2 INFRA, 3 ETCD |
| Large-scale (custom) | 3 | 5 | n | n | 3 INFRA, 5 ETCD |

Your architecture choice depends on reliability requirements and available resources. Serious production deployments require at least 3 nodes for HA configuration. With only 2 nodes, use Semi-HA configuration.


Trade-offs

  • Pigsty monitoring requires at least 1 INFRA node. Production typically uses 2; large-scale deployments use 3.
  • PostgreSQL HA requires at least 1 ETCD node. Production typically uses 3; large-scale uses 5. The ETCD cluster size must be an odd number.
  • Object storage (MinIO) requires at least 1 MINIO node. Production typically uses 4+ nodes in MNMD clusters.
  • Production PG clusters typically use at least two-node primary-replica configuration; serious deployments use 3 nodes; high read loads can have dozens of replicas.
  • For PostgreSQL, you can also use advanced configurations: offline instances, sync instances, standby clusters, delayed clusters, etc.

Single-Node Setup

The simplest configuration with everything on a single node. Installs four essential modules by default. Typically used for demos, devbox, or testing.

| ID | NODE | PGSQL | INFRA | ETCD |
|---|---|---|---|---|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 |

With an external S3/MinIO backup repository providing RTO/RPO guarantees, this configuration works for standard production environments.
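As an illustration, a single-node inventory along these lines can be declared in Pigsty's configuration file (a minimal sketch using the standard Pigsty inventory parameters `infra_seq`, `etcd_seq`, `pg_seq`, `pg_role`, and `pg_cluster`; substitute your own node IP for `10.10.10.10`):

```yaml
all:
  children:
    infra:                     # monitoring / alerting / repo on this node
      hosts:
        10.10.10.10: { infra_seq: 1 }
    etcd:                      # single-member DCS for HA metadata
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
    pg-meta:                   # the default single-instance PG cluster
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
  vars:
    admin_ip: 10.10.10.10      # the admin node, here the same node
```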

Single-node variants (meta, slim, infra, rich) are listed in the Common Patterns table above.


Two-Node Setup

Two-node configuration enables database replication and Semi-HA capability with better data redundancy and limited failover support:

| ID | NODE | PGSQL | INFRA | ETCD |
|---|---|---|---|---|
| 1 | node-1 | pg-meta-1 (replica) | infra-1 | etcd-1 |
| 2 | node-2 | pg-meta-2 (primary) | | |

Two-node HA auto-failover has limitations. This “Semi-HA” setup only auto-recovers from specific node failures:

  • If node-1 fails: No automatic failover—requires manual promotion of node-2
  • If node-2 fails: Automatic failover works—node-1 auto-promoted
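The two-node layout above can be sketched as a PGSQL cluster definition (an illustrative fragment using the standard `pg_seq` / `pg_role` parameters; adjust IPs to your nodes):

```yaml
pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: replica }   # node-1: replica (also runs INFRA/ETCD)
    10.10.10.11: { pg_seq: 2, pg_role: primary }   # node-2: primary
  vars:
    pg_cluster: pg-meta
```

Because the single ETCD member lives on node-1, losing node-1 also loses the DCS quorum, which is why only a node-2 failure can be recovered automatically.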

Three-Node Setup

The three-node template provides a true baseline HA configuration, tolerating any single node failure with automatic recovery.

| ID | NODE | PGSQL | INFRA | ETCD |
|---|---|---|---|---|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 |
| 2 | node-2 | pg-meta-2 | infra-2 | etcd-2 |
| 3 | node-3 | pg-meta-3 | infra-3 | etcd-3 |
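A three-node inventory of this shape can be sketched as follows (illustrative only; the three-member ETCD cluster gives the quorum needed for automatic failover):

```yaml
etcd:                          # 3 members: tolerates one member loss
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars:
    etcd_cluster: etcd
pg-meta:                       # 1 primary + 2 replicas
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }
    10.10.10.12: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-meta
```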

Four-Node Setup

Pigsty Sandbox uses the standard four-node configuration.

| ID | NODE | PGSQL | INFRA | ETCD |
|---|---|---|---|---|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 |
| 2 | node-2 | pg-test-1 | | |
| 3 | node-3 | pg-test-2 | | |
| 4 | node-4 | pg-test-3 | | |

For demo purposes, INFRA / ETCD modules aren’t configured for HA. You can adjust further:

| ID | NODE | PGSQL | INFRA | ETCD | MINIO |
|---|---|---|---|---|---|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 | minio-1 |
| 2 | node-2 | pg-test-1 | infra-2 | etcd-2 | |
| 3 | node-3 | pg-test-2 | | etcd-3 | |
| 4 | node-4 | pg-test-3 | | | |
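The second cluster in the sandbox, pg-test, can be sketched as its own inventory group (an illustrative fragment; pg-meta, infra, etcd, and minio would be defined alongside it as in the earlier examples):

```yaml
pg-test:                       # 3-node test cluster spanning node-2..node-4
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test
```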

More Nodes

With proper virtualization infrastructure or abundant resources, you can use more nodes for dedicated deployment of each module, achieving optimal reliability, observability, and performance.

| ID | NODE | INFRA | ETCD | MINIO | PGSQL |
|---|---|---|---|---|---|
| 1 | 10.10.10.10 | infra-1 | | | pg-meta-1 |
| 2 | 10.10.10.11 | infra-2 | | | pg-meta-2 |
| 3 | 10.10.10.21 | | etcd-1 | | |
| 4 | 10.10.10.22 | | etcd-2 | | |
| 5 | 10.10.10.23 | | etcd-3 | | |
| 6 | 10.10.10.31 | | | minio-1 | |
| 7 | 10.10.10.32 | | | minio-2 | |
| 8 | 10.10.10.33 | | | minio-3 | |
| 9 | 10.10.10.34 | | | minio-4 | |
| 10 | 10.10.10.40 | | | | pg-src-1 |
| 11 | 10.10.10.41 | | | | pg-src-2 |
| 12 | 10.10.10.42 | | | | pg-src-3 |
| 13 | 10.10.10.50 | | | | pg-test-1 |
| 14 | 10.10.10.51 | | | | pg-test-2 |
| 15 | 10.10.10.52 | | | | pg-test-3 |
| 16 | … | | | | |
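With dedicated nodes, each module becomes its own group; for example, the four MinIO nodes above can be sketched as an MNMD cluster (an illustrative fragment assuming the `minio_seq`, `minio_cluster`, and `minio_data` parameters; drive paths are placeholders for your actual disks):

```yaml
minio:                         # 4-node MNMD object storage cluster
  hosts:
    10.10.10.31: { minio_seq: 1 }
    10.10.10.32: { minio_seq: 2 }
    10.10.10.33: { minio_seq: 3 }
    10.10.10.34: { minio_seq: 4 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}' # multiple drives per node for erasure coding
```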

Last modified 2026-01-06: batch update (cc9e058)