R/W Separation: Unlimited read scaling

Separate reads from writes through different service ports, with replicas that can cascade to any depth and automatic load balancing across them

Dedicate instances to analytics/ETL and keep fast queries apart from slow ones, so offline workloads are handled without interference

  • Read-only service: Directs to read-only replicas, with the primary as fallback (see the sketch after this list)
  • Offline service: Directs to specially marked replicas or dedicated analytics instances
  • Production record: A single primary serving thirty replicas, chained through cascading relay replicas
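
A minimal sketch of port-based routing with psycopg2, assuming Pigsty's default service ports (5433 primary, 5434 replica, 5438 offline) and its demo credentials; the host address is hypothetical:

```python
import psycopg2

# Pigsty's default service ports (adjust to your own cluster config):
#   5433 -> primary (read-write), 5434 -> replica (read-only),
#   5438 -> offline (dedicated analytics / slow-query replicas)
HOST = "10.10.10.3"  # hypothetical cluster VIP
SERVICES = {"read-write": 5433, "read-only": 5434, "offline": 5438}

def connect(service: str):
    """Open a connection through the named service port."""
    return psycopg2.connect(
        host=HOST, port=SERVICES[service],
        dbname="meta", user="dbuser_meta", password="DBUser.Meta",
    )

# Reads fan out across replicas (falling back to the primary when no
# replica is alive); writes always land on the primary.
with connect("read-only") as ro:
    with ro.cursor() as cur:
        cur.execute("SELECT pg_is_in_recovery()")  # True on a replica
        print(cur.fetchone()[0])
```

The application never hardcodes a host role: it only picks a service port, and the load balancer decides which instance actually answers.
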
Connection Pooling: High concurrency made easy

Built-in PgBouncer connection pooling, ready out of the box: high-concurrency headaches become a thing of the past

Transaction pooling is enabled by default, sharply reducing connection contention and lifting overall throughput

  • Default transaction-level pooling funnels thousands of concurrent client connections down to single-digit active server connections (see the sketch after this list)
  • Enabled by default with zero configuration; database and connection-pool objects are kept in sync automatically
  • Multiple PgBouncer instances can be deployed to work around its single-process bottleneck
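
You can watch the client-to-server fan-in directly through PgBouncer's admin console. A minimal sketch, assuming the conventional PgBouncer port 6432, Pigsty demo credentials, and a hypothetical host:

```python
import psycopg2

# PgBouncer exposes a virtual "pgbouncer" admin database on its own port.
admin = psycopg2.connect(
    host="10.10.10.10", port=6432, dbname="pgbouncer",
    user="dbuser_meta", password="DBUser.Meta",
)
admin.autocommit = True  # the admin console does not speak transactions

with admin.cursor() as cur:
    cur.execute("SHOW POOLS")
    cols = [d.name for d in cur.description]
    for row in cur.fetchall():
        pool = dict(zip(cols, row))
        # cl_active = client-side connections, sv_active = real server
        # backends: under transaction pooling the former can be in the
        # thousands while the latter stays in single digits.
        print(pool["database"], pool["cl_active"], pool["sv_active"])
```
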
Load Balancing: Console-driven traffic control

Through the HAProxy web console, operators can monitor and steer request traffic in real time

Rolling drains of connections and requests enable seamless online migration; in an emergency, traffic can be taken over manually

  • Stateless HAProxy can be scaled at will or deployed on dedicated servers
  • Weights can be adjusted from the command line to drain instances slated for retirement or gradually warm up new members (see the sketch after this list)
  • The password-protected HAProxy management UI is exposed uniformly through Nginx
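
Weight adjustment can also be scripted against HAProxy's runtime API over its admin socket. A sketch using the documented `set weight` command; the socket path and backend/server names are hypothetical:

```python
import socket

SOCK = "/var/run/haproxy.sock"  # hypothetical admin socket path

def haproxy(cmd: str) -> str:
    """Send one command to HAProxy's runtime API, return its reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(cmd.encode() + b"\n")
        return s.recv(65536).decode()

# Drain a member slated for retirement: weight 0 admits no new
# sessions while existing ones finish (a rolling drain).
print(haproxy("set weight pg-test-replica/pg-test-3 0"))

# Warm up a new member gradually by stepping its weight back up.
for w in (16, 64, 100):
    print(haproxy(f"set weight pg-test-replica/pg-test-4 {w}"))
```
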
Horizontal Scaling: In-place switch to distributed

The Citus extension scales PostgreSQL horizontally into a distributed cluster, supporting multi-writer and multi-tenant workloads

Transform an existing cluster into a distributed system in place, breaking through single-node bottlenecks in write throughput and data capacity

  • Accelerate real-time OLAP analytics using multi-node parallel processing
  • Shard by row key or by schema, easily supporting multi-tenant scenarios
  • Online shard rebalancing, dynamically adjusting capacity as demand changes (see the sketch after this list)
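
A sketch of distributing an existing table and rebalancing it online, using documented Citus SQL functions (Citus 11+ for background rebalancing); the connection details and table names are hypothetical:

```python
import psycopg2

conn = psycopg2.connect(
    "host=10.10.10.10 dbname=meta user=dbuser_meta password=DBUser.Meta"
)
conn.autocommit = True

with conn.cursor() as cur:
    # Shard an existing table in place by its row key.
    cur.execute("SELECT create_distributed_table('orders', 'tenant_id')")
    # Add a worker node, then rebalance shards online in the background.
    cur.execute("SELECT citus_add_node('10.10.10.12', 5432)")
    cur.execute("SELECT citus_rebalance_start()")
    # Poll rebalance progress while the cluster keeps serving traffic.
    cur.execute("SELECT * FROM citus_rebalance_status()")
    print(cur.fetchall())
```
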
Storage Expansion: External tables with transparent compression

Leverage the transparent compression of analytical extensions to reach columnar compression ratios of 10:1 or better, as sketched below

Read and write data in object storage through FDWs and extensions, enabling hot/cold tiering and virtually unlimited capacity
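
A sketch of transparent columnar compression using the `columnar` access method shipped by the Citus/Hydra columnar extension; the connection details, tables, and synthetic data are hypothetical, and real-world ratios depend on how repetitive the data is:

```python
import psycopg2

conn = psycopg2.connect(
    "host=10.10.10.10 dbname=meta user=dbuser_meta password=DBUser.Meta"
)
conn.autocommit = True

with conn.cursor() as cur:
    # Same data stored row-wise (heap) and column-wise (columnar AM).
    cur.execute("CREATE TABLE t_row (id bigint, v text)")
    cur.execute("CREATE TABLE t_col (id bigint, v text) USING columnar")
    fill = ("INSERT INTO {} SELECT g, repeat('x', 100) "
            "FROM generate_series(1, 1000000) g")
    cur.execute(fill.format("t_row"))
    cur.execute(fill.format("t_col"))
    for t in ("t_row", "t_col"):
        cur.execute("SELECT pg_size_pretty(pg_total_relation_size(%s))", (t,))
        # Highly repetitive columns compress at 10:1 or better.
        print(t, cur.fetchone()[0])
```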

Mass Deployment: Large clusters made easy

Designed for extreme scale: equally at home running a 10,000-core cluster or a single 1-core node

No limit on nodes per deployment; scale is bounded only by the capacity of the monitoring system

  • Batch operations at scale through Ansible, saying goodbye to console point-and-click (see the sketch after this list)
  • Largest production deployment record: 25,000 vCPU, 3,000+ instances
  • Expand the monitoring system without limit via the optional VictoriaMetrics deployment
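
A sketch of batch provisioning by driving Pigsty's `pgsql.yml` playbook in a loop; the cluster group names are hypothetical:

```python
import subprocess

# Roll out the PGSQL module to many clusters in one batch instead of
# clicking through a console. `pgsql.yml` is Pigsty's provisioning
# playbook; the group names below are hypothetical.
clusters = [f"pg-shard-{i}" for i in range(1, 33)]

for cluster in clusters:
    subprocess.run(
        ["ansible-playbook", "pgsql.yml", "-l", cluster],
        check=True,  # abort the batch on the first failing cluster
    )
```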

Cloud Elasticity: Cloud database elasticity at server prices

Supports deployment on cloud servers, taking full advantage of the elasticity of cloud compute and cloud storage

Develop flexible multi-cloud strategies: enjoy cloud database elasticity at cloud server prices

  • Pigsty needs nothing but cloud servers and works the same on any cloud provider
  • Unified deployment on-cloud and off-cloud, seamless switching between public, private, hybrid, and multi-cloud
  • Flexibly upgrade or downgrade compute and storage specifications, purchase or lease as needed: buy baseline, rent peak
