Pigsty Docs v4.0

“PostgreSQL In Great STYle”: Postgres, Infras, Graphics, Service, Toolbox, it’s all Yours.

—— Battery-Included, Local-First PostgreSQL Distribution as a Free & Open-Source RDS

GitHub | Demo | Blog | Discuss | Discord | DeepWiki | Roadmap | 中文文档

Get Started with the latest release: curl -fsSL https://repo.pigsty.io/get | bash


About: Features | History | Event | Community | Privacy Policy | License | Sponsor | Subscription

Setup: Install | Offline Install | Preparation | Configuration | Playbook | Provision | Security | FAQ

Concept: Architecture | Cluster Model | Monitoring | IaC | HA | PITR | Service Access | Security

Reference: Supported Linux | File Hierarchy | Parameters | Playbooks | Ports | Comparison | Cost

Modules: PGSQL | INFRA | NODE | ETCD | MINIO | REDIS | FERRET | DOCKER | APP

1 - PIGSTY

2 - About

Learn about Pigsty itself in every aspect - features, history, license, privacy policy, community events, and news.

2.1 - Features

Pigsty’s value propositions and highlight features.

“PostgreSQL In Great STYle”: Postgres, Infras, Graphics, Service, Toolbox, it’s all Yours.

—— Battery-included, local-first PostgreSQL distribution, open-source RDS alternative


Value Propositions


Overview

Pigsty is a better local, open-source RDS alternative for PostgreSQL:

  • Battery-Included RDS: From kernel to RDS distribution, providing production-grade PG database services for versions 13-18 on EL/Debian/Ubuntu.
  • Rich Extensions: Providing unparalleled 440+ extensions with out-of-the-box distributed, time-series, geospatial, graph, vector, multi-modal database capabilities.
  • Flexible Modular Architecture: Freely composable and extensible modules: Redis/Etcd/MinIO/Mongo can be added as needed, or used independently to monitor existing RDS instances, hosts, and databases.
  • Stunning Observability: Based on modern observability stack Prometheus/Grafana, providing stunning, unparalleled database observability capabilities.
  • Battle-Tested Reliability: Self-healing high-availability architecture: automatic failover on hardware failure, seamless traffic switching. With auto-configured PITR as safety net for accidental data deletion!
  • Easy to Use and Maintain: Declarative API, GitOps ready, foolproof operation, Database/Infra-as-Code and management SOPs encapsulating management complexity!
  • Solid Security Practices: Encryption and backup all included, with built-in basic ACL best practices. As long as hardware and keys are secure, you don’t need to worry about database security!
  • Broad Application Scenarios: Low-code data application development, or use preset Docker Compose templates to spin up massive software using PostgreSQL with one click!
  • Open-Source Free Software: Own better database services at less than 1/10 the cost of cloud databases! Truly “own” your data and achieve autonomy!

Pigsty integrates PostgreSQL ecosystem tools and best practices:

  • Out-of-the-box PostgreSQL distribution, deeply integrating 440+ extension plugins for geospatial, time-series, distributed, graph, vector, search, and AI!
  • Runs directly on the operating system with no container runtime required, supporting mainstream operating systems: EL 8/9/10, Ubuntu 22.04/24.04, and Debian 12/13.
  • Based on patroni, haproxy, and etcd, creating a self-healing high-availability architecture: automatic failover on hardware failure, seamless traffic switching.
  • Based on pgBackRest and optional MinIO clusters providing out-of-the-box PITR point-in-time recovery, serving as a safety net for software defects and accidental data deletion.
  • Based on Ansible providing declarative APIs to abstract complexity, greatly simplifying daily operations management in a Database-as-Code manner.
  • Pigsty has broad applications: use it as a complete application runtime, build demo data/visualization applications, or spin up PG-backed software with one click via Docker templates.
  • Provides Vagrant-based local development and testing sandbox environment, and Terraform-based cloud auto-deployment solutions, keeping development, testing, and production environments consistent.
  • Deploy and monitor dedicated Redis (primary-replica, sentinel, cluster), MinIO, Etcd, Haproxy, MongoDB (FerretDB) clusters

Battery-Included RDS

Get production-grade PostgreSQL database services locally immediately!

PostgreSQL is a near-perfect database kernel, but it needs more tools and systems to become a good enough database service (RDS). Pigsty helps PostgreSQL make this leap. Pigsty solves the various challenges you’ll encounter when using PostgreSQL: kernel extension installation, connection pooling, load balancing, service access, high availability / automatic failover, log collection, metrics monitoring, alerting, backup recovery, PITR, access control, parameter tuning, security encryption, certificate issuance, NTP, DNS, configuration management, CMDB, management playbooks… You no longer need to worry about these details!

Pigsty supports PostgreSQL 13 ~ 18 mainline kernels and other compatible forks, running on EL / Debian / Ubuntu and compatible OS distributions, available on x86_64 and ARM64 chip architectures, with no container runtime required. Besides database kernels and many out-of-the-box extension plugins, Pigsty also provides the complete infrastructure and runtime required for database services, as well as auto-deployment solutions for local sandboxes, production environments, and cloud IaaS.

Pigsty can bootstrap an entire environment from bare metal with one click, reaching the last mile of software delivery. Ordinary developers and operations engineers can quickly get started and manage databases part-time, building enterprise-grade RDS services without database experts!
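The one-click bootstrap described above boils down to a short shell session. This is an illustrative sketch of the documented flow; exact script names and flags may differ across Pigsty versions, so check the install guide for your release:

```bash
# Download and extract the latest Pigsty release (official one-liner)
curl -fsSL https://repo.pigsty.io/get | bash
cd ~/pigsty

# Generate a config inventory matching the detected OS and hardware
./configure

# Run the idempotent installation playbook to bring up the environment
./install.yml
```

After the playbook finishes, the node hosts a production-grade PostgreSQL cluster together with the monitoring infrastructure.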

pigsty-arch.jpg


Rich Extensions

Hyper-converged multi-modal, use PostgreSQL for everything, one PG to replace all databases!

PostgreSQL’s soul lies in its rich extension ecosystem, and Pigsty uniquely deeply integrates 440+ extensions from the PostgreSQL ecosystem, providing you with an out-of-the-box hyper-converged multi-modal database!

Extensions can create synergistic effects, producing 1+1 far greater than 2 results. You can use PostGIS for geospatial data, TimescaleDB for time-series/event stream data analysis, and Citus to upgrade it in-place to a distributed geospatial-temporal database; You can use PGVector to store and search AI embeddings, ParadeDB for ElasticSearch-level full-text search, and simultaneously use precise SQL, full-text search, and fuzzy vector for hybrid search. You can also achieve dedicated OLAP database/data lakehouse analytical performance through pg_duckdb, pg_mooncake and other analytical extensions.

Using PostgreSQL as a single component to replace MySQL, Kafka, ElasticSearch, MongoDB, and big data analytics stacks has become a best practice — a single database choice can significantly reduce system complexity, greatly improve development efficiency and agility, achieving remarkable software/hardware and development/operations cost reduction and efficiency improvement.

pigsty-ecosystem.jpg


Flexible Modular Architecture

Flexible composition, free extension, multi-database support, monitor existing RDS/hosts/databases

Components in Pigsty are abstracted as independently deployable modules, which can be freely combined to address varying requirements. The INFRA module comes with a complete modern monitoring stack, while the NODE module tunes nodes to desired state and brings them under management. Installing the PGSQL module on multiple nodes automatically forms a high-availability database cluster based on primary-replica replication, while the ETCD module provides consensus and metadata storage for database high availability.

Beyond these four core modules, Pigsty also provides a series of optional feature modules: The MINIO module can provide local object storage capability and serve as a centralized database backup repository. The REDIS module can provide auxiliary services for databases in standalone primary-replica, sentinel, or native cluster modes. The DOCKER module can be used to spin up stateless application software.
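The modular composition described above is declared in a single config inventory. Here is a minimal sketch with hypothetical IPs and cluster names; the parameter names follow Pigsty’s documented conventions, but verify them against the parameter reference for your version:

```yaml
all:
  children:
    infra:                       # INFRA module: monitoring stack on this node
      hosts: { 10.10.10.10: { infra_seq: 1 } }
    etcd:                        # ETCD module: consensus store for PG HA
      hosts: { 10.10.10.10: { etcd_seq: 1 } }
      vars: { etcd_cluster: etcd }
    pg-test:                     # PGSQL module: 3-node HA postgres cluster
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica }
        10.10.10.13: { pg_seq: 3, pg_role: offline }
      vars: { pg_cluster: pg-test }
```

Adding another module is a matter of adding another group to the same inventory and running its playbook.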

Additionally, Pigsty provides PG-compatible / derivative kernel support. You can use Babelfish for MS SQL Server compatibility, IvorySQL for Oracle compatibility, OpenHaloDB for MySQL compatibility, and OrioleDB for ultimate OLTP performance.

Furthermore, you can use FerretDB for MongoDB compatibility, Supabase for Firebase compatibility, and PolarDB to meet domestic compliance requirements. More professional/pilot modules will be continuously introduced to Pigsty, such as GPSQL, KAFKA, DUCKDB, TIGERBEETLE, KUBERNETES, CONSUL, GREENPLUM, CLOUDBERRY, MYSQL, …

pigsty-sandbox.jpg


Stunning Observability

Using modern open-source observability stack, providing unparalleled monitoring best practices!

Pigsty provides monitoring best practices based on the modern open-source Grafana / Prometheus observability stack: Grafana for visualization, VictoriaMetrics for metrics collection, VictoriaLogs for log collection and querying, Alertmanager for alert notifications, and Blackbox Exporter for service availability checks. The entire system is designed for one-click deployment as the out-of-the-box INFRA module.

Any component managed by Pigsty is automatically brought under monitoring, including host nodes, load balancer HAProxy, database Postgres, connection pool Pgbouncer, metadata store ETCD, KV cache Redis, object storage MinIO, …, and the entire monitoring infrastructure itself. Numerous Grafana monitoring dashboards and preset alert rules will qualitatively improve your system observability capabilities. Of course, this system can also be reused for your application monitoring infrastructure, or for monitoring existing database instances or RDS.

Whether for failure analysis or slow query optimization, capacity assessment or resource planning, Pigsty provides comprehensive data support, truly achieving data-driven operations. In Pigsty, over three thousand types of monitoring metrics are used to describe all aspects of the entire system, and are further processed, aggregated, analyzed, refined, and presented in intuitive visualization modes. From global overview dashboards to CRUD details of individual objects (tables, indexes, functions) in a database instance, everything is visible at a glance. You can drill down, roll up, or jump horizontally freely, browsing current system status and historical trends, and predicting future evolution.

pigsty-dashboard.jpg

Additionally, Pigsty’s monitoring system module can be used independently — to monitor existing host nodes and database instances, or cloud RDS services. With just one connection string and one command, you can get the ultimate PostgreSQL observability experience.

Visit the Screenshot Gallery and Online Demo for more details.


Battle-Tested Reliability

Out-of-the-box high availability and point-in-time recovery capabilities ensure your database is rock-solid!

For table/database drops caused by software defects or human error, Pigsty provides out-of-the-box PITR point-in-time recovery capability, enabled by default without additional configuration. As long as storage space allows, base backups and WAL archiving based on pgBackRest give you the ability to quickly return to any point in the past. You can use local directories/disks, or dedicated MinIO clusters or S3 object storage services to retain longer recovery windows, according to your budget.

More importantly, Pigsty makes high availability and self-healing the standard for PostgreSQL clusters. The high-availability self-healing architecture based on patroni, etcd, and haproxy lets you handle hardware failures with ease: RTO < 30s for primary failure automatic failover (configurable), with zero data loss RPO = 0 in consistency-first mode. As long as any instance in the cluster survives, the cluster can provide complete service, and clients only need to connect to any node in the cluster to get full service.
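The RTO/RPO trade-off mentioned above is tunable per cluster. A hedged sketch follows: the `pg_conf`, `pg_rto`, and `pg_rpo` parameters and the `crit.yml` template follow Pigsty’s documented conventions, but confirm names and units against your version’s parameter reference:

```yaml
pg-critical:                       # a consistency-first cluster (hypothetical name/IPs)
  hosts:
    10.10.10.21: { pg_seq: 1, pg_role: primary }
    10.10.10.22: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-critical
    pg_conf: crit.yml              # critical template: favor consistency, RPO = 0
    pg_rto: 30                     # failover decision window, in seconds
    pg_rpo: 0                      # max tolerable data loss (0 = none)
```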

Pigsty includes built-in HAProxy load balancers for automatic traffic switching, providing DNS/VIP/LVS and other access methods for clients. Failovers and planned switchovers are almost imperceptible to the business side apart from a brief interruption, and applications don’t need to modify connection strings or restart. The minimal maintenance-window requirements bring great flexibility and convenience: you can perform rolling maintenance and upgrades on the entire cluster without coordinating with applications. Because hardware failures can usually wait until the next day to be handled, developers, operators, and DBAs can sleep soundly. Many large organizations and core institutions have run Pigsty in production for extended periods. The largest deployment has 25K CPU cores and 200+ ultra-large PostgreSQL instances; over six to seven years, that deployment weathered dozens of hardware failures and various incidents, and the DBA team changed hands several times, yet availability stayed above 99.999%.

pigsty-ha.png


Easy to Use and Maintain

Infra as Code, Database as Code, declarative APIs encapsulate database management complexity.

Pigsty provides services through declarative interfaces, elevating system controllability to a new level: users tell Pigsty “what kind of database cluster I want” through configuration inventories, without worrying about how to do it. In effect, this is similar to CRDs and Operators in K8S, but Pigsty can be used for databases and infrastructure on any node: whether containers, virtual machines, or physical machines.

Whether creating/destroying clusters, adding/removing replicas, or creating new databases/users/services/extensions/whitelist rules, you only need to modify the configuration inventory and run the idempotent playbooks provided by Pigsty, and Pigsty adjusts the system to your desired state. Users don’t need to worry about configuration details — Pigsty automatically tunes based on machine hardware configuration. You only need to care about basics like cluster name, how many instances on which machines, what configuration template to use: transaction/analytics/critical/tiny — developers can also self-serve. But if you’re willing to dive into the rabbit hole, Pigsty also provides rich and fine-grained control parameters to meet the demanding customization needs of the most meticulous DBAs.
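As a concrete illustration of the workflow above: edit the inventory, then run the matching idempotent playbook. The playbook and helper-script names here follow the Pigsty repository layout, but treat them as a sketch and verify against your installed version:

```bash
# 1. Declare the desired state: edit the cluster definition in the inventory
vi pigsty.yml

# 2. Apply it: run the idempotent PGSQL playbook, limited to that cluster
./pgsql.yml -l pg-test

# Later changes (new users/databases) are also declared in pigsty.yml,
# then applied with the matching helper scripts, e.g.:
bin/pgsql-user pg-test dbuser_app
bin/pgsql-db   pg-test app
```

Re-running any of these commands is safe: the playbooks converge the system toward the declared state rather than repeating actions.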

Beyond that, Pigsty’s own installation and deployment is also one-click foolproof, with all dependencies pre-packaged, requiring no internet access during installation. The machine resources needed for installation can also be automatically obtained through Vagrant or Terraform templates, allowing you to spin up a complete Pigsty deployment from scratch on a local laptop or cloud VM in about ten minutes. The local sandbox environment can run on a 1-core 2GB micro VM, providing the same functional simulation as production environments, usable for development, testing, demos, and learning.

pigsty-iac.jpg


Solid Security Practices

Encryption and backup all included. As long as hardware and keys are secure, you don’t need to worry about database security.

Pigsty is designed for high-standard, demanding enterprise scenarios, adopting industry-leading security best practices to protect your data security (confidentiality/integrity/availability). The default configuration’s security is sufficient to meet compliance requirements for most scenarios.

Pigsty creates self-signed CAs (or uses your provided CA) to issue certificates and encrypt network communication. Sensitive management pages and API endpoints that need protection are password-protected. Database backups use AES encryption, database passwords use scram-sha-256 encryption, and plugins are provided to enforce password strength policies. Pigsty provides an out-of-the-box, easy-to-use, easily extensible ACL model, providing read/write/admin/ETL permission distinctions, with HBA rule sets following the principle of least privilege, ensuring system confidentiality through multiple layers of protection.

Pigsty enables database checksums by default to avoid silent data corruption, with replicas serving as a fallback for bad blocks. It provides a zero-data-loss crit configuration template, and uses a watchdog to ensure HA fencing as a last resort. You can audit database operations through the audit plugin, and all system and database logs are collected for reference to meet compliance requirements.

Pigsty correctly configures SELinux and firewall settings, and follows the principle of least privilege in designing OS user groups and file permissions, ensuring system security baselines meet compliance requirements. Security is also uncompromised for auxiliary optional components like Etcd and MinIO — both use RBAC models and TLS encrypted communication, ensuring overall system security.

A properly configured system can easily pass MLPS Level 3 / SOC 2. As long as you follow security best practices, deploy on internal networks with properly configured security groups and firewalls, database security will no longer be your pain point.

pigsty-acl.jpg


Broad Application Scenarios

Use preset Docker templates to spin up massive software using PostgreSQL with one click!

In various data-intensive applications, the database is often the trickiest part. For example, the core difference between GitLab Enterprise and Community Edition lies in the monitoring and high availability of the underlying PostgreSQL database. If you already have a good enough local PG RDS, you no longer need to pay for a vendor’s homemade database components.

Pigsty provides the Docker module and many out-of-the-box Compose templates. You can use Pigsty-managed high-availability PostgreSQL (as well as Redis and MinIO) as backend storage, spinning up such software in stateless mode with one click: GitLab, Gitea, Wiki.js, NocoDB, Odoo, Jira, Confluence, Harbor, Mastodon, Discourse, KeyCloak, Mattermost, etc. If your application needs a reliable PostgreSQL database, Pigsty is perhaps the simplest way to get one.

Pigsty also provides application development toolsets closely related to PostgreSQL: PGAdmin4, PGWeb, ByteBase, PostgREST, Kong, as well as EdgeDB, FerretDB, Supabase — “upper-layer databases” that use PostgreSQL as storage. More wonderfully, you can quickly build interactive data applications in a low-code manner based on the Grafana and Postgres built into Pigsty, and even use Pigsty’s built-in ECharts panels to create more expressive interactive visualizations.

Pigsty provides a powerful runtime for your AI applications. Your agents can leverage PostgreSQL and the powerful capabilities of the observability world in this environment to quickly build data-driven intelligent agents.

pigsty-app.jpg


Open-Source Free Software

Pigsty is free software open-sourced under Apache-2.0, nurtured by the passion of PostgreSQL-loving community members

Pigsty is completely open-source and free software, allowing you to run enterprise-grade PostgreSQL database services at nearly pure hardware cost without database experts. For comparison, database vendors’ “enterprise database services” and public cloud vendors’ RDS charge premiums several to over ten times the underlying hardware resources as “service fees.”

Many users choose the cloud precisely because they can’t handle databases themselves; many users use RDS because there’s no other choice. We will break cloud vendors’ monopoly, providing users with a cloud-neutral, better open-source RDS alternative: Pigsty follows PostgreSQL upstream closely, with no vendor lock-in, no annoying “licensing fees,” no node count limits, and no data collection. All your core assets — data — can be “autonomously controlled,” in your own hands.

Pigsty itself aims to replace tedious manual database operations with database autopilot software, but even the best software can’t solve all problems. There will always be some rare, low-frequency edge cases requiring expert intervention. This is why we also provide professional subscription services to provide safety nets for enterprise users who need them. Subscription consulting fees of tens of thousands are less than one-thirtieth of a top DBA’s annual salary, completely eliminating your concerns and putting costs where they really matter. For community users, we also contribute with love, providing free support and daily Q&A.

pigsty-price.jpg

2.2 - History

The origin and motivation of the Pigsty project, its development history, and future goals and vision.

Historical Origins

The Pigsty project began in 2018-2019, originating from Tantan. Tantan is an internet dating app — China’s Tinder, now acquired by Momo. Tantan was a Nordic-style startup with a Swedish engineering founding team.

Tantan had excellent technical taste, using PostgreSQL and Go as its core technology stack. The entire Tantan system architecture was modeled after Instagram, designed entirely around the PostgreSQL database. With several million daily active users, millions of TPS, and hundreds of TB of data, the only data component was PostgreSQL. Almost all business logic was implemented in PG stored procedures — even 100ms recommendation algorithms! It was arguably the most complex PostgreSQL-at-scale use case in China at the time.

This atypical development model of deeply using PostgreSQL features placed extremely high demands on the capabilities of engineers and DBAs. And Pigsty is the open-source project we forged in this real-world large-scale, high-standard database cluster scenario — embodying our experience and best practices as top PostgreSQL experts.


Development Process

In the beginning, Pigsty did not have the vision, goals, and scope it has today. It started as a PostgreSQL monitoring system for our own use. We surveyed every available solution — open-source, commercial, and cloud-based: Datadog, pgwatch, and so on — and none could meet our observability needs. So I decided to build one myself based on Grafana and Prometheus. This became Pigsty’s predecessor and prototype. As a monitoring system, Pigsty was quite impressive, helping us solve countless management problems.

Subsequently, developers wanted such a monitoring system on their local development machines, so we used Ansible to write provisioning playbooks, transforming this system from a one-time construction task into reusable, replicable software. New versions allowed users to use Vagrant and Terraform, using Infrastructure as Code to quickly spin up local DevBox development machines or production environment servers, automatically completing PostgreSQL and monitoring system deployment.

Next, we redesigned the production PostgreSQL architecture, introducing Patroni and pgBackRest to solve high availability and point-in-time recovery. We developed a zero-downtime migration solution based on logical replication and rolled two hundred production database clusters up to the latest major version through blue-green deployments. We then incorporated these capabilities into Pigsty.

Pigsty is software we built for ourselves. The biggest benefit of “eating our own dog food” is that we are both developers and users — as client users, we know exactly what we need, do not cut corners, and never worry about automating ourselves out of jobs.

We solved problem after problem, depositing the solutions into Pigsty. Pigsty’s positioning also gradually evolved from a monitoring system into an out-of-the-box PostgreSQL database distribution. We then decided to open-source Pigsty and began a series of technical sharing and publicity, and external users from various industries began using Pigsty and providing feedback.


Full-Time Entrepreneurship

In 2022, the Pigsty project received seed funding from Miracle Plus, initiated by Dr. Qi Lu, allowing me to work on this full-time.

As an open-source project, Pigsty has developed quite well. In these years of full-time work, Pigsty’s GitHub stars have grown from a few hundred to 4,600+; it made the HN front page, and growth began snowballing. In November 2025, Pigsty won the Magneto Award at the PostgreSQL Ecosystem Conference. In 2026, Pigsty’s subproject PGEXT.CLOUD was selected for a PGCon.Dev 2026 talk. Pigsty became the first Chinese open-source project to appear on the stage of this core PostgreSQL ecosystem conference.

Previously, Pigsty could only run on CentOS 7, but it now covers all mainstream Linux distributions (EL, Debian, Ubuntu) across 14 operating system platforms. Supported PG major versions cover 13-18, and we maintain and integrate 444 extension plugins in the PG ecosystem. Among these, I personally maintain over half (270+), providing out-of-the-box RPM/DEB packages. Together with Pigsty itself, this is our way of giving back to the PG ecosystem: built on open source, contributing back to open source.

Pigsty’s positioning has continuously evolved from a PostgreSQL database distribution into an open-source cloud database, genuinely benchmarking against cloud vendors’ entire cloud-database product lines.


Rebel Against Public Clouds

Public cloud vendors like AWS, Azure, GCP, and Aliyun have provided many conveniences for startups, but they are closed-source and force users to rent infrastructure at exorbitant fees.

We believe that excellent database services, like excellent database kernels, should be accessible to every user, rather than requiring expensive rental from cyber lords.

Cloud computing’s agility and elasticity value proposition is strong, but it should be free, open-source, inclusive, and local-first — We believe the cloud computing universe needs a solution representing open-source values that returns infrastructure control to users without sacrificing the benefits of the cloud.

Therefore, we are also leading a movement and battle to exit the cloud, as rebels against public clouds, to reshape the industry’s values.


Our Vision

I hope that in the future world, everyone will have the de facto right to freely use excellent services, rather than being confined to a few cyber lord public cloud giants’ territories as cyber tenants or even cyber serfs.

This is exactly what Pigsty aims to do — a better, free and open-source RDS alternative. Allowing users to spin up database services better than cloud RDS anywhere (including cloud servers) with one click.

Pigsty is a complete complement to PostgreSQL, and a spicy mockery of cloud databases. It literally means “pigsty,” but it’s also an acronym for Postgres In Great STYle, meaning “PostgreSQL in its full glory.”

Pigsty itself is completely open-source and free software, so you can build a PostgreSQL service that scores 90 without database experts. We sustain operations by providing premium consulting services to take you from 90 to 100, with warranty, Q&A, and a safety net.

A well-built system may run for years without needing a “safety net,” but database problems, once they occur, are never small. Often, expert experience can turn decay into magic, and we provide such premium consulting — we believe this is a more just, reasonable, and sustainable model.


About the Team

I am Feng Ruohang, the author of Pigsty. Almost all of Pigsty’s code is developed by me alone.

Individual heroism still exists in the software field. Only unique individuals can create unique works — I hope Pigsty becomes such a work.

If you’re interested in me, here’s my personal homepage: https://vonng.com/

“Modb Interview with Feng Ruohang” (Chinese)

“Post-90s, Quit to Start Business, Says Will Crush Cloud Databases” (Chinese)




2.3 - News & Events

News and events related to Pigsty and PostgreSQL, including latest announcements!

Recent News


Conferences & Talks

| Date | Type | Event | Topic |
|------|------|-------|-------|
| 2025-11-29 | Award & Talk | The 8th Conf of PG Ecosystem (Hangzhou) | PostgreSQL Magneto Award, A World-Grade Postgres Meta Distribution |
| 2025-05-16 | Lightning | PGConf.Dev 2025, Montreal | Extension Delivery: Make your PGEXT accessible to users |
| 2025-05-12 | Keynote | PGEXT.DAY, PGCon.Dev 2025 | The Missing Package Manager and Extension Repo for PostgreSQL Ecosystem |
| 2025-04-19 | Workshop | PostgreSQL Database Technology Summit | Using Pigsty to Deploy PG Ecosystem Partners: Dify, Odoo, Supabase |
| 2025-04-11 | Live Host | OSCHINA Data Intelligence Talk | Is the Viral MCP Hype or Revolutionary? |
| 2025-01-15 | Live Stream | Open Source Veterans & Newcomers Episode 4 | PostgreSQL Extensions Devouring DB World? PG Package Manager pig & Self-hosted RDS |
| 2025-01-09 | Award | OSCHINA 2024 | Outstanding Contribution Expert Award |
| 2025-01-06 | Panel | China PostgreSQL Database Ecosystem Conference | PostgreSQL Extensions are Devouring the Database World |
| 2024-11-23 | Podcast | Tech Hotpot Podcast | From the Linux Foundation: Why the Recent Focus on ‘Chokepoints’? |
| 2024-08-21 | Interview | Blue Tech Wave | Interview with Feng Ruohang: Simplifying PG Management |
| 2024-08-15 | Tech Summit | GOTC Global Open Source Technology Summit | PostgreSQL AI/ML/RAG Extension Ecosystem and Best Practices |
| 2024-07-12 | Keynote | 13th PG China Technical Conference | The Future of Database World: Extensions, Service, and Postgres |
| 2024-05-31 | Unconference | PGCon.Dev 2024 Global PG Developer Conference | Built-in Prometheus Metrics Exporter |
| 2024-05-28 | Seminar | PGCon.Dev 2024 Extension Summit | Extension in Core & Binary Packing |
| 2024-05-10 | Live Debate | Three-way Talk: Cloud Mudslide Series Episode 3 | Is Public Cloud a Scam? |
| 2024-04-17 | Live Debate | Three-way Talk: Cloud Mudslide Series Episode 2 | Are Cloud Databases a Tax on Intelligence? |
| 2024-04-16 | Panel | Cloudflare Immerse Shenzhen | Cyber Bodhisattva Panel Discussion |
| 2024-04-12 | Tech Summit | 2024 Data Technology Carnival | Pigsty: Solving PostgreSQL Operations Challenges |
| 2024-03-31 | Live Debate | Three-way Talk: Cloud Mudslide Series Episode 1 | Luo Selling Cloud While We’re Moving Off Cloud? |
| 2024-01-24 | Live Host | OSCHINA Open Source Talk Episode 9 | Will DBAs Be Eliminated by Cloud? |
| 2023-12-20 | Live Debate | Open Source Talk Episode 7 | To Cloud or Not: Cost Cutting or Value Creation? |
| 2023-11-24 | Tech Summit | Vector Databases in the LLM Era | Panel: New Future of Vector Databases in the AI Age |
| 2023-09-08 | Interview | Motianlun Feature Interview | Feng Ruohang: A Tech Enthusiast Who Makes Great Open Source Founders |
| 2023-08-16 | Tech Summit | DTCC 2023 | DBA Night: PostgreSQL vs MySQL Open Source License Issues |
| 2023-08-09 | Live Debate | Open Source Talk Episode 1 | MySQL vs PostgreSQL: Which is World’s No.1? |
| 2023-07-01 | Tech Summit | SACC 2023 | Workshop 8: FinOps Practice: Cloud Cost Management & Optimization |
| 2023-05-12 | Meetup | PostgreSQL China Wenzhou Meetup | PG With DB4AI: Vector Database PGVECTOR & AI4DB: Self-Driving Database Pigsty |
| 2023-04-08 | Tech Summit | Database Carnival 2023 | A Better Open Source RDS Alternative: Pigsty |
| 2023-04-01 | Tech Summit | PostgreSQL China Xi’an Meetup | PG High Availability & Disaster Recovery Best Practices |
| 2023-03-23 | Live Stream | Bytebase x Pigsty | Best Practices for Managing PostgreSQL: Bytebase x Pigsty |
| 2023-03-04 | Tech Summit | PostgreSQL China Conference | Challenging RDS, Pigsty v2.0 Release |
| 2023-02-01 | Tech Summit | DTCC 2022 | Open Source RDS Alternative: Battery-Included, Self-Driving Database Distro Pigsty |
| 2022-07-21 | Live Debate | Cloud Swallows Open Source | Can Open Source Strike Back Against Cloud? |
| 2022-07-04 | Interview | Creator’s Story | Post-90s Developer Quits to Start Up, Aiming to Challenge Cloud Databases |
| 2022-06-28 | Live Stream | Bass’s Roundtable | DBA’s Gospel: SQL Audit Best Practices |
| 2022-06-12 | Demo Day | MiraclePlus S22 Demo Day | User-Friendly Cost-Effective Database Distribution Pigsty |
| 2022-06-05 | Live Stream | PG Chinese Community Sharing | Pigsty v1.5 Quick Start, New Features & Production Cluster Setup |

2.4 - Roadmap

Future feature planning, new feature release schedule, and todo list.

Release Strategy

Pigsty uses semantic versioning: <major>.<minor>.<patch>. Alpha/Beta/RC versions will have suffixes like -a1, -b1, -c1 appended to the version number.

Major version updates signify incompatible foundational changes and major new features; minor version updates typically indicate regular feature updates and small API changes; patch version updates mean bug fixes and package version updates.

Pigsty plans to release one major version update per year. Minor version updates usually follow PostgreSQL’s minor version update rhythm, catching up within a month at the latest after a new PostgreSQL version is released. Pigsty typically plans 4-6 minor versions per year. For complete release history, please refer to Release Notes.
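For illustration, a version string following the scheme above can be split into its components with plain POSIX parameter expansion; the version string below is a made-up example, not a reference to any actual release:

```shell
# Split a Pigsty-style semantic version into its components (illustrative sketch).
version="3.1.0-b1"            # made-up example, not a real release
core="${version%%-*}"         # strip pre-release suffix -> 3.1.0
suffix="${version#"$core"}"   # what was stripped        -> -b1 (empty for stable)
major="${core%%.*}"           # -> 3
rest="${core#*.}"             # -> 1.0
minor="${rest%%.*}"           # -> 1
patch="${rest#*.}"            # -> 0
echo "major=$major minor=$minor patch=$patch suffix=${suffix#-}"
# -> major=3 minor=1 patch=0 suffix=b1
```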


Features Under Consideration

  • Agent Native CLI - PIG
  • DBA Agent - basic integration
  • Grafana dashboard improvements

Here are our Active Issues and Roadmap.


Extensions and Packages

For the extension support roadmap, you can find it here: https://pgext.cloud/e/roadmap

Under Consideration

Not Considering for Now

2.5 - Join the Community

Pigsty is a Build in Public project. We are very active on GitHub, and Chinese users are mainly active in WeChat groups.

GitHub

Our GitHub repository is: https://github.com/pgsty/pigsty. Please give us a ⭐️ star!

We welcome anyone to submit new Issues or create Pull Requests, propose feature suggestions, and contribute to Pigsty.

Star History Chart

Please note that for issues related to Pigsty documentation, please submit Issues in the github.com/pgsty/pigsty.cc repository.


WeChat Groups

Chinese users are mainly active in WeChat groups. Currently, there are seven active groups. Groups 1-4 are full; for other groups, you need to add the assistant’s WeChat to be invited.

To join the WeChat community, search for “Pigsty小助手” (WeChat ID: pigsty-cc), note or send “加群” (join group), and the assistant will invite you to the group.


International Community

Telegram: https://t.me/joinchat/gV9zfZraNPM3YjFh

Discord: https://discord.gg/j5pG8qfKxU

You can also contact me via email: [email protected]


Community Help

When you encounter problems using Pigsty, you can seek help from the community. The more information you provide, the more likely you are to get help from the community.

Please refer to the Community Help Guide and provide as much information as possible so that community members can help you solve the problem. Here is a reference template for asking for help:

What happened? (Required)

Pigsty version and OS version (Required)

```bash
grep version pigsty.yml
cat /etc/os-release
uname -a
```

Some cloud providers have customized standard OS distributions. You can tell us which cloud provider’s OS image you are using. If you have customized and modified the environment after installing the OS, or if there are specific security rules and firewall configurations in your LAN, please also inform us when asking questions.

Pigsty configuration file

Please don’t forget to redact any sensitive information: passwords, internal keys, sensitive configurations, etc.

```bash
cat ~/pigsty/pigsty.yml
```
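If you prefer not to redact by hand, here is a hypothetical `sed` sketch that masks password-like keys; the sample file, its path, and the key values are made up for the example and are not real Pigsty defaults:

```shell
# Hypothetical sketch: mask password-like YAML keys before sharing a config.
# The sample file and its values below are illustrative only.
cat > /tmp/pigsty-sample.yml <<'EOF'
pg_admin_username: dbuser_dba
pg_admin_password: supersecret
EOF
sed -E 's/^([[:space:]]*[A-Za-z0-9_]*password[A-Za-z0-9_]*[[:space:]]*:[[:space:]]*).*/\1<REDACTED>/' /tmp/pigsty-sample.yml
# -> pg_admin_username: dbuser_dba
# -> pg_admin_password: <REDACTED>
```

Always review the output yourself before posting: a pattern like this only catches keys containing "password" and will miss tokens, certificates, or other secrets.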

What did you expect to happen?

Please describe what should happen under normal circumstances, and how the actual situation differs from expectations.

How to reproduce this issue?

Please tell us in as much detail as possible how to reproduce this issue.

Monitoring screenshots

If you are using the monitoring system provided by Pigsty, you can provide relevant screenshots.

Error logs

Please provide logs related to the error as much as possible. Please do not paste content like “Failed to start xxx service” that has no informational value.

You can query logs from Grafana / VictoriaLogs, or get logs from the following locations:

  • Syslog: /var/log/messages (RHEL) or /var/log/syslog (Debian)
  • Postgres: /pg/log/postgres/*
  • Patroni: /pg/log/patroni/*
  • Pgbouncer: /pg/log/pgbouncer/*
  • Pgbackrest: /pg/log/pgbackrest/*

You can also pull service logs directly from systemd:

```bash
journalctl -u patroni
journalctl -u <service name>
```
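As a convenience, the locations above can be collected into a single archive to attach to an issue. This is an illustrative sketch, not a Pigsty-provided tool; adjust service names and paths for your deployment:

```shell
# Illustrative sketch: bundle recent logs into one tarball for an issue report.
# Service names and log paths follow the list above; adjust as needed.
dir="$(mktemp -d)"
# journalctl may be absent or empty on some systems; keep going either way
journalctl -u patroni --since "1 hour ago" > "$dir/patroni.journal" 2>/dev/null || true
for svc in postgres patroni pgbouncer pgbackrest; do
  if [ -d "/pg/log/$svc" ]; then cp -r "/pg/log/$svc" "$dir/$svc"; fi
done
tar -czf /tmp/pigsty-logs.tgz -C "$dir" .
echo "bundle ready: /tmp/pigsty-logs.tgz"
```

Remember to scan the bundle for credentials or connection strings before uploading it anywhere public.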

Have you searched Issues/Website/FAQ?

In the FAQ, we provide answers to many common questions. Please check before asking.

You can also search for related issues from GitHub Issues and Discussions:

Is there any other information we need to know?

The more information and context you provide, the more likely we can help you solve the problem.

2.6 - Privacy Policy

What user data does Pigsty software and website collect, and how will we process your data and protect your privacy?

Pigsty Software

When you install Pigsty software, if you use offline package installation in a network-isolated environment, we will not receive any data about you.

If you choose online installation, when downloading related packages, our servers or cloud provider servers will automatically log the visiting machine’s IP address and/or hostname in the logs, along with the package names you downloaded.

We will not share this information with other organizations unless required by law. (Honestly, we’d have to be really bored to look at this stuff.)

Pigsty’s primary domain is: pigsty.io. For mainland China, please use the registered mirror site pigsty.cc.


Pigsty Website

When you visit our website, our servers will automatically log your IP address and/or hostname in Nginx logs.

We will only store information such as your email address, name, and location when you decide to send us such information by completing a survey or registering as a user on one of our websites.

We collect this information to help us improve website content, customize web page layouts, and contact people for technical and support purposes. We will not share your email address with other organizations unless required by law.

This website uses Google Analytics, a web analytics service provided by Google, Inc. (“Google”). Google Analytics uses “cookies,” which are text files placed on your computer to help the website analyze how users use the site.

The information generated by the cookie about your use of the website (including your IP address) will be transmitted to and stored by Google on servers in the United States. Google will use this information to evaluate your use of the website, compile reports on website activity for website operators, and provide other services related to website activity and internet usage. Google may also transfer this information to third parties if required by law or where such third parties process the information on Google’s behalf. Google will not associate your IP address with any other data held by Google. You may refuse the use of cookies by selecting the appropriate settings on your browser, however, please note that if you do this, you may not be able to use the full functionality of this website. By using this website, you consent to the processing of data about you by Google in the manner and for the purposes set out above.

If you have any questions or comments about this policy, or request deletion of personal data, you can contact us by sending an email to [email protected]




2.7 - License

Pigsty’s open-source licenses — Apache-2.0 and CC BY 4.0

License Summary

Pigsty core uses Apache-2.0; documentation uses CC BY 4.0.

Official License: https://github.com/pgsty/pigsty/blob/main/LICENSE


Pigsty Core

The Pigsty core is licensed under Apache License 2.0.

Apache-2.0 is a permissive open-source license. You may freely use, modify, and distribute the software for commercial purposes without opening your own source code or adopting the same license.

| What This License Grants | What This License Does NOT Grant | License Conditions |
|---|---|---|
| Commercial use | Trademark use | Include license and copyright notice |
| Modification | Liability & warranty | State changes |
| Distribution | | |
| Patent grant | | |
| Private use | | |

Pigsty Documentation

Pigsty documentation sites (pigsty.cc, pigsty.io, pgsty.com) use Creative Commons Attribution 4.0 International (CC BY 4.0).

CC BY 4.0 permits free sharing and adaptation with appropriate credit, a license link, and indication of changes.

| What This License Grants | What This License Does NOT Grant | License Conditions |
|---|---|---|
| Commercial use | Trademark use | Attribution |
| Modification | Liability & warranty | Indicate changes |
| Distribution | Patent grant | Provide license link |
| Private use | | |

SBOM Inventory

Open-source software used or related to the Pigsty project.

For PostgreSQL extension plugin licenses, refer to PostgreSQL Extension License List.

| Module | Software Name | License | Purpose & Description | Necessity |
|---|---|---|---|---|
| PGSQL | PostgreSQL | PostgreSQL License | PostgreSQL kernel | Required |
| PGSQL | patroni | MIT License | PostgreSQL high availability | Required |
| ETCD | etcd | Apache License 2.0 | HA consensus and distributed config storage | Required |
| INFRA | Ansible | GPLv3 | Executes playbooks and management commands | Required |
| INFRA | Nginx | BSD-2 | Exposes Web UI and serves local repo | Recommended |
| PGSQL | pgbackrest | MIT License | PITR backup/recovery management | Recommended |
| PGSQL | pgbouncer | ISC License | PostgreSQL connection pooling | Recommended |
| PGSQL | vip-manager | BSD 2-Clause License | Automatic L2 VIP binding to PG primary | Recommended |
| PGSQL | pg_exporter | Apache License 2.0 | PostgreSQL and PgBouncer monitoring | Recommended |
| NODE | node_exporter | Apache License 2.0 | Host node monitoring metrics | Recommended |
| NODE | haproxy | HAPROXY's License (GPLv2) | Load balancing and service exposure | Recommended |
| INFRA | Grafana | AGPLv3 | Database visualization platform | Recommended |
| INFRA | VictoriaMetrics | Apache License 2.0 | TSDB, metric collection, alerting | Recommended |
| INFRA | VictoriaLogs | Apache License 2.0 | Centralized log collection, storage, query | Recommended |
| INFRA | DNSMASQ | GPLv2 / GPLv3 | DNS resolution and cluster name lookup | Recommended |
| MINIO | MinIO | AGPLv3 | S3-compatible object storage service | Optional |
| NODE | keepalived | MIT License | VIP binding on node clusters | Optional |
| REDIS | Redis | Redis License (BSD-3) | Cache service, locked at 7.2.6 | Optional |
| REDIS | Redis Exporter | MIT License | Redis monitoring | Optional |
| MONGO | FerretDB | Apache License 2.0 | MongoDB compatibility over PostgreSQL | Optional |
| DOCKER | docker-ce | Apache License 2.0 | Container management | Optional |
| CLOUD | SealOS | Apache License 2.0 | Fast K8S cluster deployment and packaging | Optional |
| DUCKDB | DuckDB | MIT | High-performance analytics | Optional |
| External | Vagrant | Business Source License 1.1 | Local test environment VMs | Optional |
| External | Terraform | Business Source License 1.1 | One-click cloud resource provisioning | Optional |
| External | Virtualbox | GPLv2 | Virtual machine management software | Optional |

Necessity Levels:

  • Required: Essential core capabilities, no option to disable
  • Recommended: Enabled by default, can be disabled via configuration
  • Optional: Not enabled by default, can be enabled via configuration

Apache-2.0 License Text

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright (C) 2018-2026  Ruohang Feng, @Vonng ([email protected])

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

2.8 - Sponsor Us

Pigsty sponsors and investors list - thank you for your support of this project!

Pigsty is a free and open-source software, passionately developed by PostgreSQL community members, aiming to integrate the power of the PostgreSQL ecosystem and promote the widespread adoption of PostgreSQL. If our work has helped you, please consider sponsoring or supporting our project:

  • Sponsor us directly with financial support - express your sincere support in the most direct and powerful way!
  • Consider purchasing our Technical Support Services. We can provide professional PostgreSQL high-availability cluster deployment and maintenance services, making your budget worthwhile!
  • Share your Pigsty use cases and experiences through articles, talks, and videos.
  • Allow us to mention your organization in “Users of Pigsty.”
  • Recommend/refer our project and services to friends, colleagues, and clients in need.
  • Follow our WeChat Official Account and share relevant technical articles to groups and your social media.

Angel Investors

Pigsty is a project invested by Miracle Plus (formerly YC China) S22. We thank Miracle Plus and Dr. Qi Lu for their support of this project!


Sponsors

Special thanks to Vercel for sponsoring Pigsty and hosting the Pigsty website.

Vercel OSS Program

2.9 - User Cases

Pigsty customer and application cases across various domains and industries

According to Google Analytics PV and download statistics, Pigsty currently has approximately 100,000 users, with half from mainland China and half from other regions globally. They span across multiple industries including internet, cloud computing, finance, autonomous driving, manufacturing, tech innovation, ISV, and defense. If you are using Pigsty and are willing to share your case and Logo with us, please contact us - we offer one free consultation session as a token of appreciation.

Internet

Tantan: 200+ physical machines for PostgreSQL and Redis services

Bilibili: Supporting PostgreSQL innovative business

Cloud Vendors

Bitdeer: Providing PG DBaaS

Oracle OCI: Using Pigsty to deliver PostgreSQL clusters

Finance

AirWallex: Monitoring 200+ GCP PostgreSQL databases

Media & Entertainment

Media Storm: Self-hosted PG RDS / Victoria Metrics

Autonomous Driving

Momenta: Autonomous driving, managing self-hosted PostgreSQL clusters

Manufacturing

Huafon Group: Using Pigsty to deliver PostgreSQL clusters as chemical industry time-series data warehouse

Tech Innovation

Beijing Lingwu Technology: Migrating PostgreSQL from cloud to self-hosted

Motphys: Self-hosted PostgreSQL supporting GitLab

Sailong Biotech: Self-hosted Supabase

Hangzhou Lingma Technology: Self-hosted PostgreSQL

ISV

Inner Mongolia Haode Tianmu Technology Co., Ltd.

Shanghai Yuanfang

DSG

2.10 - Subscription

Pigsty Professional/Enterprise subscription service - When you encounter difficulties related to PostgreSQL and Pigsty, our subscription service provides you with comprehensive support.

Pigsty aims to unite the power of the PostgreSQL ecosystem and help users make the most of the world’s most popular database, PostgreSQL, with self-driving database management software.

While Pigsty itself already resolves many issues in PostgreSQL usage, achieving truly enterprise-grade service quality requires expert support and comprehensive coverage from the original provider. We deeply understand the importance of professional commercial support for enterprise customers. Therefore, Pigsty Enterprise Edition provides a series of value-added services on top of the open-source version, which customers can choose according to their needs, helping them make better use of PostgreSQL and Pigsty.

If you have any of the following needs, please consider Pigsty subscription service:

  • Running databases in critical scenarios requiring strict SLA guarantees and comprehensive coverage.
  • Need comprehensive support for complex issues related to Pigsty and PostgreSQL.
  • Seeking guidance on PostgreSQL/Pigsty production environment best practices.
  • Want experts to help interpret monitoring dashboards, analyze and identify performance bottlenecks and fault root causes, and provide recommendations.
  • Need to plan database architectures that meet security/disaster recovery/compliance requirements based on existing resources and business needs.
  • Need to migrate from other databases to PostgreSQL, or migrate and transform legacy instances.
  • Building an observability system, data dashboards, and visualization applications based on the Prometheus/Grafana technology stack.
  • Migrating off cloud and seeking open-source alternatives to RDS for PostgreSQL - cloud-neutral, vendor lock-in-free solutions.
  • Want professional support for Redis/ETCD/MinIO, as well as extensions like TimescaleDB/Citus.
  • Want to perform secondary development and OEM branding with explicit commercial authorization.
  • Want to sell Pigsty as SaaS/PaaS/DBaaS, or provide technical services/consulting/cloud services based on this distribution.

Subscription Plans

In addition to the Open Source Edition, Pigsty offers two different subscription service tiers: Professional Edition and Enterprise Edition, which you can choose based on your actual situation and needs.

Pigsty Open Source Edition (OSS)
Free and Open Source
No scale limit, no warranty

License: Apache-2.0

PG Support: 18

Architecture Support: x86_64

OS Support: Latest minor versions of three families

  • EL 9.4
  • Debian 12.7
  • Ubuntu 22.04.5

Features: Core Modules

SLA: No SLA commitment

Support: Community Q&A, no person-day support option

Repository: Global Cloudflare hosted repository

Pigsty Professional Edition (PRO)
Starting Price: ¥150,000 / year
Default choice for regular users

License: Commercial License

PG Support: 17, 18

Architecture Support: x86_64, Arm64

OS Support: Five families major/minor versions

  • EL 8 / 9 compatible
  • Debian 12
  • Ubuntu 22 / 24

Features: All Modules (except 信创 / domestic-innovation support)

SLA: Response within business hours

Expert consulting services:

  • Software bug fixes
  • Complex issue analysis
  • Expert ticket support

Support: 1 person-day included per year

Delivery: Standard offline software package

Repository: China mainland mirror sites

Pigsty Enterprise Edition (ENTERPRISE)
Starting Price: ¥400,000 / year
Critical scenarios with strict SLA

License: Commercial License

PG Support: 12 - 18+

Architecture Support: x86_64, Arm64

OS Support: Customized on demand

  • EL, Debian, Ubuntu
  • Cloud Linux operating systems
  • Domestic OS and ARM

Features: All Modules

SLA: 7 x 24 (< 1h)

Enterprise-level expert consulting services:

  • Software bug fixes
  • Complex issue analysis
  • Expert Q&A support
  • Backup compliance advice
  • Upgrade path support
  • Performance bottleneck identification
  • Annual architecture review
  • Extension plugin integration
  • DBaaS & OEM use cases

Support: 2 person-days included per year

Repository: China mainland mirror sites

Delivery: Customized offline software package

信创 (Domestic Innovation): PolarDB-O support


Pigsty Open Source Edition (OSS)

Pigsty Open Source Edition uses the Apache-2.0 license and provides complete core functionality free of charge, but without any warranty service. If you find defects in Pigsty, we welcome you to submit an Issue on GitHub.

For the open-source version, we provide pre-built standard offline software packages for PostgreSQL 18 on the latest minor versions of three specific operating system distributions: EL 9.4, Debian 12.7, and Ubuntu 22.04.5 (as a gesture of support for open source, we also provide a Debian 12 Arm64 offline package).

The Pigsty open-source edition gives junior development/operations engineers 70%+ of a professional DBA's capabilities: even without a database expert on hand, they can easily set up a highly available, high-performance, easy-to-maintain, secure, and reliable PostgreSQL database cluster.

| Code | OS Distribution | x86_64 | Arm64 |
|---|---|---|---|
| EL9 | RHEL 9 / Rocky9 / Alma9 | el9.x86_64 | |
| U22 | Ubuntu 22.04 (jammy) | u22.x86_64 | |
| D12 | Debian 12 (bookworm) | d12.x86_64 | d12.aarch64 |


Pigsty Professional Edition (PRO)

Pigsty Professional Edition subscription provides complete functional modules and warranty for Pigsty itself. For defects in PostgreSQL itself and extension plugins, we will make our best efforts to provide feedback and fixes through the PostgreSQL global developer community.

Pigsty Professional Edition is built on the open source version, fully compatible with all features of the open source version, and provides additional functional modules and broader database/operating system version compatibility options: we will provide build options for all minor versions of five mainstream operating system distributions.

Pigsty Professional Edition includes support for the latest two PostgreSQL major versions (18, 17), providing all available extension plugins in both major versions, ensuring you can smoothly migrate to the latest PostgreSQL major version through rolling upgrades.

Pigsty Professional Edition subscription allows you to use China mainland mirror site software repositories, accessible without VPN/proxy; we will also customize offline software installation packages for your exact operating system major/minor version, ensuring normal installation and delivery in air-gapped environments, achieving autonomous and controllable deployment.

Pigsty Professional Edition subscription provides standard expert consulting services, including complex issue analysis, DBA Q&A support, backup compliance advice, etc. We commit to responding to your issues within business hours (5x8), and provide 1 person-day support per year, with optional person-day add-on options.

Pigsty Professional Edition uses a commercial license, providing additional modules, technical support, and warranty services.

Pigsty Professional Edition starting price is ¥150,000 / year, equivalent to the annual fee for 9 vCPU AWS high-availability RDS PostgreSQL, or a junior operations engineer with a monthly salary of 10,000 yuan.

| Code | OS Distribution | x86_64 | Arm64 |
|---|---|---|---|
| EL9 | RHEL 9 / Rocky9 / Alma9 | el9.x86_64 | el9.aarch64 |
| EL8 | RHEL 8 / Rocky8 / Alma8 / Anolis8 | el8.x86_64 | el8.aarch64 |
| U24 | Ubuntu 24.04 (noble) | u24.x86_64 | u24.aarch64 |
| U22 | Ubuntu 22.04 (jammy) | u22.x86_64 | u22.aarch64 |
| D12 | Debian 12 (bookworm) | d12.x86_64 | d12.aarch64 |

Pigsty Enterprise Edition

Pigsty Enterprise Edition subscription includes all service content provided by the Pigsty Professional Edition subscription, plus the following value-added service items:

Pigsty Enterprise Edition subscription provides the broadest range of database/operating system version support, including extended support for EOL operating systems (EL7, U20, D11), domestic operating systems, cloud vendor operating systems, and EOL database major versions (from PG 13 onwards), as well as full support for Arm64 architecture chips.

Pigsty Enterprise Edition subscription provides 信创 (domestic innovation) and localization solutions, allowing you to use PolarDB v2.0 (this kernel license needs to be purchased separately) kernel to replace the native PostgreSQL kernel to meet domestic compliance requirements.

Pigsty Enterprise Edition subscription provides higher-standard enterprise-level consulting services, committing to 7x24 with (< 1h) response time SLA, and can provide more types of consulting support: version upgrades, performance bottleneck identification, annual architecture review, extension plugin integration, etc.

Pigsty Enterprise Edition subscription includes 2 person-days of support per year, with optional person-day add-on options, for resolving more complex and time-consuming issues.

Pigsty Enterprise Edition allows you to use Pigsty for DBaaS purposes, building cloud database services for external sales.

Pigsty Enterprise Edition starts at ¥400,000 / year, roughly the annual fee of a 24 vCPU high-availability AWS RDS instance, or an operations expert with a monthly salary of ¥30,000.

| Code | OS Distribution Version | x86_64 | PG17 | PG16 | PG15 | PG14 | PG13 | PG12 | Arm64 | PG17 | PG16 | PG15 | PG14 | PG13 | PG12 |
|------|-------------------------|--------|:----:|:----:|:----:|:----:|:----:|:----:|-------|:----:|:----:|:----:|:----:|:----:|:----:|
| EL9 | RHEL 9 / Rocky9 / Alma9 | el9.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | el9.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| EL8 | RHEL 8 / Rocky8 / Alma8 / Anolis8 | el8.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | el8.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| U24 | Ubuntu 24.04 (noble) | u24.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | u24.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| U22 | Ubuntu 22.04 (jammy) | u22.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | u22.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| D12 | Debian 12 (bookworm) | d12.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | d12.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| D11 | Debian 11 (bullseye) | d11.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | d11.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| U20 | Ubuntu 20.04 (focal) | u20.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | u20.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| EL7 | RHEL7 / CentOS7 / UOS … | el7.x86_64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | el7.arm64 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |

Pigsty Subscription Notes

Feature Differences

Pigsty Professional/Enterprise Edition includes the following additional features compared to the open source version:

  • Command Line Management Tool: Unlock the full functionality of the Pigsty command line tool (pig)
  • System Customization Capability: Provide pre-built offline installation packages for exact mainstream Linux operating system distribution major/minor versions
  • Offline Installation Capability: Complete Pigsty installation in environments without Internet access (air-gapped environments)
  • Multi-version PG Kernel: Allow users to freely specify and install PostgreSQL major versions within the lifecycle (13 - 17)
  • Kernel Replacement Capability: Allow users to use other PostgreSQL-compatible kernels to replace the native PG kernel, and the ability to install these kernels offline
    • Babelfish: Provides Microsoft SQL Server wire protocol-level compatibility
    • IvorySQL: Based on PG, provides Oracle syntax/type/stored procedure compatibility
    • PolarDB PG: Provides support for open-source PolarDB for PostgreSQL kernel
    • PolarDB O: 信创 database, Oracle-compatible kernel that meets domestic compliance requirements (Enterprise Edition subscription only)
  • Extension Support Capability: Provides out-of-the-box installation for 440 available PG Extensions for PG 13-18 on mainstream operating systems.
  • Complete Functional Modules: Provides all functional modules:
    • Supabase: Reliably self-host production-grade open-source Firebase
    • MinIO: Enterprise PB-level object storage planning and self-hosting
    • DuckDB: Provides comprehensive DuckDB support, and PostgreSQL + DuckDB OLAP extension plugin support
    • Kafka: Provides high-availability Kafka cluster deployment and monitoring
    • Kubernetes, VictoriaMetrics & VictoriaLogs
  • Domestic Operating System Support: Provides domestic 信创 operating system support options (Enterprise Edition subscription only)
  • Domestic ARM Architecture Support: Provides domestic ARM64 architecture support options (Enterprise Edition subscription only)
  • China Mainland Mirror Repository: Smooth installation without VPN, providing domestic YUM/APT repository mirrors and DockerHub access proxy.
  • Chinese Interface Support: Monitoring system Chinese interface support (Beta)

Payment Model

Pigsty subscriptions are billed annually; the one-year term starts from the contract date, and payment made before the current contract expires counts as automatic renewal. Consecutive subscriptions are discounted: the first renewal (second year) gets 5% off, the second and subsequent renewals get 10% off the subscription fee, and a one-time subscription of three years or more gets 15% off overall.

After the annual subscription contract terminates, you can choose not to renew the subscription service. Pigsty will no longer provide software updates, technical support, and consulting services, but you can continue to use the already installed version of Pigsty Professional Edition software. If you subscribed to Pigsty professional services and choose not to renew, when re-subscribing you do not need to make up for the subscription fees during the interruption period, but all discounts and benefits will be reset.

Pigsty’s pricing is designed to be worth the money: you immediately get top DBAs’ database architecture solutions and management best practices, along with their consulting support and comprehensive coverage, at a cost highly competitive with hiring database experts full-time or using cloud databases. For reference, here is market pricing for enterprise-grade database professional services:

A fair price for decent database professional services is ¥10,000–20,000 per vCPU per year (one vCPU being one CPU thread; 1 Intel core = 2 vCPU threads). Pigsty provides top-tier PostgreSQL expert services in China with a per-node billing model, which on today’s high-core-count server nodes delivers unparalleled cost savings.


Pigsty Expert Services

In addition to Pigsty subscription, Pigsty also provides on-demand Pigsty x PostgreSQL expert services - industry-leading database experts available for consultation.


Contact Information

Please send an email to [email protected]. Users in mainland China are welcome to add WeChat ID RuohangFeng.

2.11 - FAQ

Answers to frequently asked questions about the Pigsty project itself.

What is Pigsty, and what is it not?

Pigsty is a PostgreSQL database distribution, a local-first open-source RDS cloud database solution. Pigsty is not a Database Management System (DBMS), but rather a tool, distribution, solution, and best practice for managing DBMS.

Analogy: if the database is the car, then the DBA is the driver, RDS is the taxi service, and Pigsty is the autonomous-driving software.


What problem does Pigsty solve?

The ability to use databases well is extremely scarce: you can either hire database experts at high cost to self-build (hire a driver), or rent RDS from cloud vendors at sky-high prices (hail a taxi). Now there is a new option: Pigsty (autonomous driving). Pigsty helps you use databases well, letting you self-build local cloud database services of higher quality and efficiency at less than 1/10 the cost of RDS, without a DBA!


Who are Pigsty’s target users?

Pigsty has two typical target user groups. The primary one is medium-to-large companies building ultra-large-scale enterprise/production-grade PostgreSQL RDS / DBaaS services: through extreme customizability, Pigsty can meet the most demanding database management needs and provide enterprise-level support and service guarantees.

At the same time, Pigsty also provides “out-of-the-box” PG RDS self-building solutions for individual developers, small and medium enterprises lacking DBA capabilities, and the open-source community.


Why can Pigsty help you use databases well?

Pigsty embodies the experience and best practices of top experts, refined in the most complex and largest-scale client PostgreSQL scenarios and productized into replicable software. It solves extension installation, high availability, connection pooling, monitoring, backup and recovery, parameter tuning, IaC batch management, one-click installation, automated operations, and many other issues in one stroke, helping you sidestep known pitfalls and avoid repeating others’ mistakes.


Why is Pigsty better than RDS?

Pigsty provides a feature set and infrastructure support far beyond RDS, including 440 extensions and support for 8+ kernels. It offers a professional-grade monitoring system unique in the PG ecosystem, along with architectural best practices battle-tested in complex scenarios, all simple and easy to use.

Moreover, Pigsty was forged in top-tier client environments such as Tantan, Apple, and Alibaba, and is continuously nurtured with passion and care; its depth and maturity far exceed RDS’s one-size-fits-all approach.


Why is Pigsty cheaper than RDS?

Pigsty lets you run workloads that would cost ¥400–1,400 per core·month on cloud RDS using pure hardware resources at roughly ¥10 per core·month, while saving the DBA’s salary. Typically, the total cost of ownership (TCO) of a large-scale Pigsty deployment can be over 90% lower than RDS.

Pigsty can simultaneously reduce software licensing/services/labor costs. Self-building requires no additional staff, allowing you to spend costs where it matters most.


How does Pigsty help developers?

Pigsty integrates the most comprehensive extensions in the PG ecosystem (440), providing an All-in-PG solution: a single component replacing specialized components like Redis, Kafka, MySQL, ES, vector databases, OLAP / big data analytics.

This greatly improves R&D efficiency and agility while reducing complexity costs; with Pigsty’s support, developers can manage databases self-service and practice autonomous DevOps without needing a DBA.


How does Pigsty help operations?

Pigsty’s self-healing high-availability architecture ensures hardware failures don’t need immediate handling, letting ops and DBAs sleep well; monitoring aids problem analysis and performance optimization; IaC enables automated management of ultra-large-scale clusters.

With Pigsty’s support, operations staff can moonlight as DBAs, while DBAs can skip the system-building phase, saving significant work hours to focus on high-value work, or simply to relax and study PG.


Who is the author of Pigsty?

Pigsty is primarily developed by Feng Ruohang, a full-stack open-source contributor, database expert, and evangelist who has focused on PostgreSQL for 10 years, formerly at Alibaba, Tantan, and Apple. He now runs a one-person company providing professional consulting services.

He is also a tech KOL, the founder of the top WeChat database personal account “非法加冯” (Illegally Add Feng), with 60,000+ followers across all platforms.


What is Pigsty’s ecosystem position and influence?

Pigsty is the most influential Chinese open-source project in the global PostgreSQL ecosystem, with about 100,000 users, half from overseas. Pigsty is also one of the most active open-source projects in the PostgreSQL ecosystem, currently dominating in extension distribution and monitoring systems.

PGEXT.Cloud is a PostgreSQL extension repository maintained by Pigsty, with the world’s largest PostgreSQL extension distribution volume. It has become an upstream software supply chain for multiple international PostgreSQL vendors.

Pigsty is currently one of the major distributions in the PostgreSQL ecosystem and a challenger to cloud vendor RDS, now widely used in defense, government, healthcare, internet, finance, manufacturing, and other industries.


What scale of customers is Pigsty suitable for?

Pigsty originated from the need for ultra-large-scale PostgreSQL automated management but has been deeply optimized for ease of use. Individual developers and small-medium enterprises lacking professional DBA capabilities can also easily get started.

The largest deployment is 25K vCPU, 4.5 million QPS, 6+ years; the smallest deployment can run completely on a 1c1g VM for Demo / Devbox use.


What capabilities does Pigsty provide?

Pigsty focuses on integrating the PostgreSQL ecosystem and providing PostgreSQL best practices, but also supports a series of open-source software that works well with PostgreSQL. For example:

  • Etcd, Redis, MinIO, DuckDB, Prometheus
  • FerretDB, Babelfish, IvorySQL, PolarDB, OrioleDB
  • OpenHalo, Supabase, Greenplum, Dify, Odoo, …

What scenarios is Pigsty suitable for?

  • Running large-scale PostgreSQL clusters for business
  • Self-building RDS, object storage, cache, data warehouse, Supabase, …
  • Self-building enterprise applications like Odoo, Dify, Wiki, GitLab
  • Running monitoring infrastructure, monitoring existing databases and hosts
  • Using multiple PG extensions in combination
  • Dashboard development and interactive data application demos, data visualization, web building

Is Pigsty open source and free?

Pigsty is 100% open-source software + free software. Under the premise of complying with the open-source license, you can use it freely and for various commercial purposes.

We value software freedom. Pigsty uses the Apache-2.0 license. Please see the license for details.


Does Pigsty provide commercial support?

Pigsty itself is open-source and free, while commercial subscriptions for every budget provide quality assurance for Pigsty & PostgreSQL. Subscriptions cover a broader range of OS / PG / chip architectures and include expert consulting and support. A Pigsty commercial subscription delivers industry-leading management and technical experience, saving you valuable time, shouldering risks on your behalf, and providing a safety net for difficult problems.


Does Pigsty support domestic innovation (信创)?

Pigsty itself is not a database and is not subject to domestic innovation (信创) catalog restrictions; it already has multiple military use cases. However, the Pigsty open-source edition does not provide any form of 信创 support. The commercial subscription offers a 信创 solution in cooperation with Alibaba Cloud, supporting PolarDB-O (which carries 信创 qualifications and must be purchased separately) as the RDS kernel, capable of running on 信创 OS/chip environments.


Can Pigsty run as a multi-tenant DBaaS?

Pigsty uses the Apache-2.0 license. You may use it for DBaaS purposes under the license terms. For explicit commercial authorization, consider the Pigsty Enterprise subscription.


Can Pigsty’s Logo be rebranded as your own product?

When redistributing Pigsty, you must retain copyright notices, patent notices, trademark notices, and attribution notices from the original work, and attach prominent change descriptions in modified files while preserving the content of the LICENSE file. Under these premises, you can replace PIGSTY’s Logo and trademark, but you must not promote it as “your own original work.” We provide commercial licensing support for OEM and rebranding in the enterprise edition.


Pigsty’s Business Entity

Pigsty is a project invested by Miracle Plus S22. The original entity Panji Cloud Data (Beijing) Technology Co., Ltd. has been liquidated and divested of the Pigsty business.

Pigsty is currently independently operated and maintained by author Feng Ruohang. The business entities are:

  • Hainan Zhuxia Cloud Data Co., Ltd. / 91460000MAE6L87B94
  • Haikou Longhua Piji Data Center / 92460000MAG0XJ569B
  • Haikou Longhua Yuehang Technology Center / 92460000MACCYGBQ1N

PIGSTY® and PGSTY® are registered trademarks of Haikou Longhua Yuehang Technology Center.

2.12 - Release Note

Pigsty historical version release notes

The current stable version is v4.0.0, released 2025-12-25.

| Version | Release Date | Summary | Release Page |
|---------|--------------|---------|--------------|
| v4.0.0 | 2025-12-25 | Observability revolution, security hardening, JUICE/VIBE modules, Apache-2.0 | v4.0.0 |
| v3.7.0 | 2025-12-02 | PG18 default, 437 extensions, EL10 & Debian 13 support, PGEXT.CLOUD | v3.7.0 |
| v3.6.1 | 2025-08-15 | Routine PG minor updates, PGDG China mirror, EL10/D13 stubs | v3.6.1 |
| v3.6.0 | 2025-07-30 | pgactive, MinIO/ETCD improvements, simplified install, config cleanup | v3.6.0 |
| v3.5.0 | 2025-06-16 | PG18 beta, 421 extensions, monitoring upgrade, code refactor | v3.5.0 |
| v3.4.1 | 2025-04-05 | OpenHalo & OrioleDB, MySQL compatibility, pgAdmin improvements | v3.4.1 |
| v3.4.0 | 2025-03-30 | Backup improvements, auto certs, AGE, IvorySQL all platforms | v3.4.0 |
| v3.3.0 | 2025-02-24 | 404 extensions, extension directory, App playbook, Nginx customization | v3.3.0 |
| v3.2.2 | 2025-01-23 | 390 extensions, Omnigres, Mooncake, Citus 13 & PG17 support | v3.2.2 |
| v3.2.1 | 2025-01-12 | 350 extensions, Ivory4, Citus enhancements, Odoo template | v3.2.1 |
| v3.2.0 | 2024-12-24 | Extension CLI, Grafana enhancements, ARM64 extension completion | v3.2.0 |
| v3.1.0 | 2024-11-24 | PG17 default, config simplification, Ubuntu24 & ARM support | v3.1.0 |
| v3.0.4 | 2024-10-30 | PG17 extensions, OLAP suite, pg_duckdb | v3.0.4 |
| v3.0.3 | 2024-09-27 | PostgreSQL 17, Etcd improvements, IvorySQL 3.4, PostGIS 3.5 | v3.0.3 |
| v3.0.2 | 2024-09-07 | Mini install mode, PolarDB 15 support, monitoring view updates | v3.0.2 |
| v3.0.1 | 2024-08-31 | Routine bug fixes, Patroni 4 support, Oracle compatibility improvements | v3.0.1 |
| v3.0.0 | 2024-08-25 | 333 extensions, pluggable kernels, MSSQL/Oracle/PolarDB compatibility | v3.0.0 |
| v2.7.0 | 2024-05-20 | Extension explosion, 20+ new powerful extensions, Docker apps | v2.7.0 |
| v2.6.0 | 2024-02-28 | PG16 as default, ParadeDB & DuckDB extensions introduced | v2.6.0 |
| v2.5.1 | 2023-12-01 | Routine minor update, PG16 key extension support | v2.5.1 |
| v2.5.0 | 2023-09-24 | Ubuntu/Debian support: bullseye, bookworm, jammy, focal | v2.5.0 |
| v2.4.1 | 2023-09-24 | Supabase/PostgresML support with graphql, jwt, pg_net, vault | v2.4.1 |
| v2.4.0 | 2023-09-14 | PG16, RDS monitoring, new extensions: FTS/graph/HTTP/embedding | v2.4.0 |
| v2.3.1 | 2023-09-01 | PGVector with HNSW, PG16 RC1, doc refresh, Chinese docs, bug fixes | v2.3.1 |
| v2.3.0 | 2023-08-20 | Node VIP, FerretDB, NocoDB, MySQL stub, CVE fixes | v2.3.0 |
| v2.2.0 | 2023-08-04 | Dashboard & provisioning overhaul, UOS compatibility | v2.2.0 |
| v2.1.0 | 2023-06-10 | PostgreSQL 12-16beta support | v2.1.0 |
| v2.0.2 | 2023-03-31 | Added pgvector support, fixed MinIO CVE | v2.0.2 |
| v2.0.1 | 2023-03-21 | v2 bug fixes, security enhancements, Grafana upgrade | v2.0.1 |
| v2.0.0 | 2023-02-28 | Major architecture upgrade, compatibility/security/maintainability | v2.0.0 |
| v1.5.1 | 2022-06-18 | Grafana security hotfix | v1.5.1 |
| v1.5.0 | 2022-05-31 | Docker application support | v1.5.0 |
| v1.4.1 | 2022-04-20 | Bug fixes & full English documentation translation | v1.4.1 |
| v1.4.0 | 2022-03-31 | MatrixDB support, separated INFRA/NODES/PGSQL/REDIS modules | v1.4.0 |
| v1.3.0 | 2021-11-30 | PGCAT overhaul & PGSQL enhancement & Redis beta support | v1.3.0 |
| v1.2.0 | 2021-11-03 | Default PGSQL version upgraded to 14 | v1.2.0 |
| v1.1.0 | 2021-10-12 | Homepage, JupyterLab, PGWEB, Pev2 & pgbadger | v1.1.0 |
| v1.0.0 | 2021-07-26 | v1 GA, Monitoring System Overhaul | v1.0.0 |
| v0.9.0 | 2021-04-04 | Pigsty GUI, CLI, Logging Integration | v0.9.0 |
| v0.8.0 | 2021-03-28 | Service Provision | v0.8.0 |
| v0.7.0 | 2021-03-01 | Monitor only deployment | v0.7.0 |
| v0.6.0 | 2021-02-19 | Architecture Enhancement | v0.6.0 |
| v0.5.0 | 2021-01-07 | Database Customize Template | v0.5.0 |
| v0.4.0 | 2020-12-14 | PostgreSQL 13 Support, Official Documentation | v0.4.0 |
| v0.3.0 | 2020-10-22 | Provisioning Solution GA | v0.3.0 |
| v0.2.0 | 2020-07-10 | PGSQL Monitoring v6 GA | v0.2.0 |
| v0.1.0 | 2020-06-20 | Validation on Testing Environment | v0.1.0 |
| v0.0.5 | 2020-08-19 | Offline Installation Mode | v0.0.5 |
| v0.0.4 | 2020-07-27 | Refactor playbooks into Ansible roles | v0.0.4 |
| v0.0.3 | 2020-06-22 | Interface enhancement | v0.0.3 |
| v0.0.2 | 2020-04-30 | First Commit | v0.0.2 |
| v0.0.1 | 2019-05-15 | POC | v0.0.1 |

v4.0.1 (beta)

Highlights

  • EL default minor versions updated to EL 9.7 / EL 10.1.
  • Focused fixes for PGSQL / PGCAT Grafana dashboard usability: dynamic datasource $dsn, schema-level links, database age metrics, etc.
  • Added one-click Mattermost app template, with database, storage, portal, and optional PGFS/JuiceFS support.
  • Refactored infra-rm uninstall flow with segmented deregister cleanup for Victoria targets, Grafana datasources, and Vector log configs.
  • Tuned PostgreSQL default autovacuum thresholds to reduce frequent vacuum/analyze on small tables.
  • Fixed FD limit chain: added fs.nr_open=8M and unified service LimitNOFILE=8M to avoid startup failures caused by systemd/setrlimit.
  • Updated default Vibe experience: Jupyter disabled by default, Claude Code installed and managed via npm package.
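
The file-descriptor fix above relies on a strict hierarchy: the kernel-wide cap fs.file-max must be at least the per-process cap fs.nr_open, which in turn must be at least any service's LimitNOFILE, otherwise systemd's setrlimit() call fails at startup. A minimal illustration of such settings (file paths and the fs.file-max value here are illustrative, not Pigsty's actual templates):

```ini
# /etc/sysctl.d/99-limits.conf - apply with `sysctl --system`
# system-wide open-file cap (illustrative value)
fs.file-max = 16777216
# per-process hard cap: setrlimit() beyond this fails
fs.nr_open = 8388608

# /etc/systemd/system/patroni.service.d/override.conf
[Service]
# must not exceed fs.nr_open, or the unit fails to start
LimitNOFILE=8388608
```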

Version Updates

  • pig v1.1.0: Agentic CLI
  • timescaledb 2.25.0
  • pg_search 0.20.10
  • pgmq 1.1.0
  • pg_track_optimizer 0.9.1
  • pljs 1.0.5
  • pg_textsearch 0.5.0

API Changes

  • Corrected template guard for io_method / io_workers from pg_version >= 17 to pg_version >= 18.
  • Raised autovacuum_vacuum_threshold from 50 to 500 in oltp/crit/tiny, and to 1000 in olap.
  • Raised autovacuum_analyze_threshold from 50 to 250 in oltp/crit/tiny, and to 500 in olap.
  • Added fs.nr_open=8388608 in node tuned templates, and aligned fs.file-max / fs.nr_open / LimitNOFILE hierarchy.
  • Changed LimitNOFILE for postgres/patroni/minio from 16777216 to 8388608.
  • Added bin/validate checks for pg_databases[*].parameters and pg_hba_rules[*].order.
  • Added segmented tags in infra-rm.yml: deregister, config, env, etc.
  • Changed Vibe defaults: jupyter_enabled=false, and npm_packages now include @anthropic-ai/claude-code and happy-coder.
  • Added default Vibe environment variable: CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1.
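
The autovacuum changes above shift the trigger point, which in stock PostgreSQL is `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples` dead tuples. A quick sketch of the effect on a small table (the 0.2 scale factor is PostgreSQL's default, not a Pigsty-specific value):

```python
def vacuum_trigger_point(reltuples: int, threshold: int = 500,
                         scale_factor: float = 0.2) -> float:
    """Dead-tuple count at which autovacuum kicks in (standard PG formula)."""
    return threshold + scale_factor * reltuples

# A 100-row table: with the old threshold of 50 it vacuumed after ~70 dead
# tuples; with 500 it waits for 520, sparing tiny tables constant vacuum churn.
print(vacuum_trigger_point(100, threshold=50))   # 70.0
print(vacuum_trigger_point(100, threshold=500))  # 520.0
```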

Compatibility Fixes

  • Fixed Redis replicaof empty-guard logic and systemd stop behavior.
  • Fixed schema/table/sequence qualification and identifier quoting in pg_migration scripts.
  • Fixed wrong restart target and variable usage in pgsql role handlers.
  • Fixed blackbox config filename cleanup item and pgAdmin pgpass file format.
  • Made pg_exporter startup non-blocking to avoid slowing the main flow on exporter failures.
  • Simplified VIP address parsing: when CIDR is omitted, default netmask is 24.
  • Increased MinIO health-check retries from 3 to 5.
  • Switched node hostname setup to Ansible hostname module instead of shell calls.
  • Fixed .env format for app/electric and app/pg_exporter to standard KEY=VALUE.
  • Fixed pg_crontab syntax error in pigsty.yml.
  • Updated ETCD docs to clarify default TLS vs optional mTLS semantics.
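
The simplified VIP parsing above (a bare address defaults to a /24 netmask) can be sketched with Python's ipaddress module; `parse_vip` is an illustrative helper, not Pigsty's actual code:

```python
from ipaddress import ip_interface

def parse_vip(addr: str, default_prefix: int = 24):
    """Parse a VIP like '10.0.0.2' or '10.0.0.2/16'.

    When the CIDR suffix is omitted, fall back to the default /24 netmask.
    Returns (ip_string, prefix_length).
    """
    if '/' not in addr:
        addr = f"{addr}/{default_prefix}"
    iface = ip_interface(addr)
    return str(iface.ip), iface.network.prefixlen

print(parse_vip("10.10.10.2"))     # ('10.10.10.2', 24)
print(parse_vip("10.10.10.2/16"))  # ('10.10.10.2', 16)
```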

Commit List (v4.0.0..HEAD, 21 commits, 2026-02-02 ~ 2026-02-07)

c402f0e6d fix: correct io_method/io_workers version guard from PG17 to PG18
3bf676546 vibe: disable jupyter by default and install claude-code via npm_packages
613c4efa9 fix: set fs.nr_open in tuned profiles and reduce LimitNOFILE to 8M
07e499d4d new app conf template matter most
4cc68ed61 Refine infra removal playbook
7cfb98f69 fix: app docker .env file format
9b36b1875 Fix config templates and validation
318d85e6e Simplify VIP parsing and make pg_exporter non-blocking
571cd9e70 Use hostname module for nodename
de98f073c Fix blackbox config filename and pgpass format
4bff01100 Fix redis replicaof guard and systemd stop
38445b68d minio: increase health check retries
c99854969 docs(etcd): clarify TLS vs mTLS
41229124a fix pgsql roles typo
e575d17c6 fix pg_migration scripts to use fully qualified identifiers
ec4207202 fix pgsql-schema broken links
a237e6c99 tune autovacuum threshold to reduce small table vacuum frequency
e80754760 fix pgcat-database links to pgcat-table
0060f5346 fix pgsql-database / pgsql-databases age metric
43cdf72bc fix pigsty.yml typo
0d9db7b08 fix: update datasource to $dsn

Thanks

  • Thanks to @l2dy for many valuable suggestions and issues.

Checksums

This section summarizes commits since v4.0.0 (HEAD: c402f0e6d). No new release archives/checksums are published yet.

v4.0.0

curl https://pigsty.io/get | bash -s v4.0.0

318 commits, 604 files changed, +118,655 / -327,552 lines

Highlights

  • Observability Revolution: Prometheus → VictoriaMetrics (10x perf), Loki+Promtail → VictoriaLogs+Vector
  • Security Hardening: Auto-generated passwords, etcd RBAC, firewall/SELinux modes, permission tightening, Nginx Basic Auth
  • Docker Support: Run Pigsty in Docker containers with full systemd support (macOS & Linux)
  • New Module: JUICE - Mount PostgreSQL as filesystem with PITR recovery capability
  • New Module: VIBE - AI coding sandbox with Claude Code, JupyterLab, VS Code Server, Node.js
  • Database Management: pg_databases state (create/absent/recreate), instant clone with strategy
  • PITR & Fork: /pg/bin/pg-fork for instant CoW cloning, enhanced pg-pitr with pre-backup
  • HA Enhancement: pg_rto_plan with 4 RTO presets (fast/norm/safe/wide), pg_crontab scheduled tasks
  • Multi-Cloud Terraform: AWS, Azure, GCP, Hetzner, DigitalOcean, Linode, Vultr, TencentCloud templates
  • License Change: AGPL-3.0 → Apache-2.0

Infra software versions are listed below; MinIO now uses the pgsty/minio fork RPM/DEB packages.

| Package | Version | Package | Version |
|---------|---------|---------|---------|
| victoria-metrics | 1.134.0 | victoria-logs | 1.43.1 |
| vector | 0.52.0 | grafana | 12.3.1 |
| alertmanager | 0.30.1 | etcd | 3.6.7 |
| duckdb | 1.4.4 | pg_exporter | 1.1.2 |
| pgbackrest_exporter | 0.22.0 | blackbox_exporter | 0.28.0 |
| node_exporter | 1.10.2 | minio | 20251203 |
| pig | 1.0.0 | claude | 2.1.19 |
| opencode | 1.1.34 | uv | 0.9.26 |
| asciinema | 3.1.0 | prometheus | 3.9.1 |
| pushgateway | 1.11.2 | juicefs | 1.4.0 |
| code-server | 4.100.2 | caddy | 2.10.2 |
| hugo | 0.154.5 | cloudflared | 2026.1.1 |
| headscale | 0.27.1 | | |

New Modules

  • JUICE Module: JuiceFS distributed filesystem using PostgreSQL as metadata engine, supports PITR recovery for filesystem. Multiple storage backends (PG large objects, MinIO, S3), multi-instance deployment with Prometheus metrics, new node-juice dashboard.
  • VIBE Module: AI coding sandbox with Code-Server (VS Code in browser), JupyterLab (interactive computing), Node.js (JavaScript runtime), Claude Code (AI coding assistant with OpenTelemetry observability). New claude-code dashboard for usage monitoring.

PostgreSQL Extension Updates

Major extensions add PG 18 support: age, citus, documentdb, pg_search, timescaledb, pg_bulkload, rum, etc.

New: pg_textsearch 0.4.0, pg_clickhouse 0.1.3, pg_ai_query 0.1.1, etcd_fdw, pg_ttl_index 0.1.0, pljs 1.0.4, pg_retry 1.0.0, pg_weighted_statistics 1.0.0, pg_enigma 0.5.0, pglinter 1.0.1, documentdb_extended_rum 0.109, mobilitydb_datagen 1.3.0

Updated: timescaledb 2.24.0, pg_search 0.21.4, citus 14.0.0, documentdb 0.109, age 1.7.0, pg_duckdb 1.1.1, vchord 1.0.0, vchord_bm25 0.3.0, pg_biscuit 2.2.2, pg_anon 2.5.1, wrappers 0.5.7, pg_vectorize 0.26.0, pg_session_jwt 0.4.0, pg_partman 5.4.0, pgmq 1.9.0, pg_bulkload 3.1.23, pg_timeseries 0.2.0, pg_convert 0.1.0, pgBackRest 2.58

Breaking Changes

| Before | After |
|--------|-------|
| Prometheus | VictoriaMetrics |
| Loki + Promtail | VictoriaLogs + Vector |
| node_disable_firewall | node_firewall_mode |
| node_disable_selinux | node_selinux_mode |
| pg_pwd_enc | removed (always scram-sha-256) |
| infra_pip_packages | node_pip_packages |
| grafana_clean default | true → false |
| install.yml | renamed to deploy.yml |

Observability

  • VictoriaMetrics replaces Prometheus — several times the performance with a fraction of the resources
  • VictoriaLogs + Vector replaces Promtail + Loki for log collection
  • Unified log format for all components, PG logs use UTC timestamp (log_timezone)
  • PostgreSQL log rotation changed to weekly truncated rotation mode
  • Added Vector parsing configs for Nginx/Syslog/PG CSV/Pgbackrest/Grafana/Redis/etcd/MinIO logs
  • Datasource registration now runs on all Infra nodes, Victoria datasources auto-registered in Grafana
  • New grafana_pgurl parameter for using PG as Grafana backend storage
  • New grafana_view_password parameter for Grafana Meta datasource password
  • pg_exporter updated to 1.1.2 with new pg_timeline collector and numerous fixes
  • New dashboards: node-vector, node-juice, claude-code

Interface Improvements

  • install.yml playbook renamed to deploy.yml, new vibe.yml playbook for VIBE module
  • pg_databases: added state field (create/absent/recreate), strategy for cloning, newer locale params support
  • pg_users: added admin parameter with ADMIN OPTION, set and inherit options
  • pg_hba: support order field for priority, IPv6 localhost access
  • New node_crontab auto-restores original crontab on node-rm
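
A hypothetical inventory snippet illustrating these new fields (only `state`, `strategy`, `admin`, and `order` come from the release notes above; the surrounding structure and values are illustrative, not taken from Pigsty's actual templates):

```yaml
pg_databases:
  - name: meta                    # ensure the database exists
    state: create                 # state: create / absent / recreate
  - name: meta_clone
    state: create
    strategy: clone               # `strategy` controls how the clone is made
  - name: legacy
    state: absent                 # drop the database if present
pg_users:
  - name: dbuser_admin
    admin: true                   # grant membership WITH ADMIN OPTION
pg_hba_rules:
  # `order` sets rule priority; other fields here are illustrative
  - { user: all, db: all, addr: '::1/128', auth: pwd, order: 100 }
```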

Parameter Optimization

  • pg_io_method: auto, sync, worker, io_uring options, default worker
  • pg_rto_plan: RTO presets (fast/norm/safe/wide) integrating Patroni & HAProxy config
  • pg_crontab: scheduled tasks for postgres dbsu
  • idle_replication_slot_timeout: default 7d, crit template 3d
  • file_copy_method: set to clone for PG18 instant database cloning
  • Crit template enables Patroni strict sync mode
  • PITR default archive_mode changed to preserve

Architecture Improvements

  • Fixed /infra symlink pointing to /data/infra on Infra nodes
  • Local repo at /data/nginx/pigsty, /www symlinks to /data/nginx
  • New scripts: /pg/bin/pg-fork (CoW cloning), /pg/bin/pg-drop-role, bin/pgsql-ext
  • Enhanced /pg/bin/pg-pitr for instance-level PITR with pre-backup
  • UV Python manager moved from infra to node module with node_uv_env parameter
  • Terraform templates: AWS, Azure, GCP, Hetzner, DigitalOcean, Linode, Vultr, TencentCloud
  • Simu template simplified from 36 to 20 nodes, new 10-node and Citus templates

Security Improvements

  • configure -g auto-generates strong random passwords
  • Replaced node_disable_firewall with node_firewall_mode (off/none/zone)
  • Replaced node_disable_selinux with node_selinux_mode (disabled/permissive/enforcing)
  • Nginx Basic Auth support for optional HTTP authentication
  • Enabled etcd RBAC, each cluster can only manage its own PG cluster
  • etcd root password stored in /etc/etcd/etcd.pass, admin-readable only
  • New node_admin_sudo parameter for admin sudo mode (all/nopass)
  • Fixed ownca certificate validity for Chrome recognition

Bug Fixes

  • Fixed ownca certificate validity for Chrome compatibility
  • Fixed Vector 0.52 syslog_raw parsing issue
  • Fixed pg_pitr multiple replica clonefrom timing issues
  • Fixed Ansible SELinux race condition in dnsmasq
  • Fixed EL9 aarch64 patroni & llvmjit issues
  • Fixed pgbouncer pid path (/run/postgresql)
  • Fixed HAProxy service template variable path
  • Fixed MinIO reload handler ineffective
  • Fixed vmetrics_port default value to 8428
  • Fixed pg-failover-callback for all Patroni callback events

New Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| node_firewall_mode | enum | none | Firewall mode: off/none/zone |
| node_selinux_mode | enum | permissive | SELinux mode |
| node_admin_sudo | enum | nopass | Admin sudo privilege level |
| pg_io_method | enum | worker | I/O method: auto/sync/worker/io_uring |
| pg_rto_plan | dict | - | RTO presets: fast/norm/safe/wide |
| pg_crontab | list | [] | postgres dbsu scheduled tasks |
| grafana_view_password | string | DBUser.Viewer | Grafana Meta datasource password |
| juice_cache | path | /data/juice | JuiceFS cache directory |
| juice_instances | dict | {} | JuiceFS instance definitions |
| vibe_data | path | /fs | VIBE workspace directory |
| code_enabled | bool | true | Enable Code-Server |
| code_password | string | Vibe.Coding | Code-Server password |
| jupyter_enabled | bool | true | Enable JupyterLab |
| jupyter_password | string | Vibe.Coding | JupyterLab access token |
| claude_enabled | bool | true | Enable Claude Code configuration |
| nodejs_enabled | bool | true | Enable Node.js installation |
| nodejs_registry | string | '' | npm registry, auto China mirror |
| node_uv_env | path | /data/venv | Node UV venv path, empty to skip |
| node_pip_packages | string | '' | pip packages for UV venv |

Removed Parameters: node_disable_firewall, node_disable_selinux, infra_pip_packages, pg_pwd_enc, pgbackrest_clean, code_home, jupyter_home

Checksums

bc48405075b3ec6a85fc2c99a1f77650  pigsty-v4.0.0.tgz
db9797c3c8ae21320b76a442c1135c7b  pigsty-pkg-v4.0.0.d12.aarch64.tgz
1eed26eee42066ca71b9aecbf2ca1237  pigsty-pkg-v4.0.0.d12.x86_64.tgz
03540e41f575d6c3a7c63d1d30276d49  pigsty-pkg-v4.0.0.d13.aarch64.tgz
36a6ee284c0dd6d9f7d823c44280b88f  pigsty-pkg-v4.0.0.d13.x86_64.tgz
f2b6ec49d02916944b74014505d05258  pigsty-pkg-v4.0.0.el10.aarch64.tgz
73f64c349366fe23c022f81fe305d6da  pigsty-pkg-v4.0.0.el10.x86_64.tgz
287f767fbb66a9aaca9f0f22e4f20491  pigsty-pkg-v4.0.0.el8.aarch64.tgz
c0886aab454bd86245f3869ef2ab4451  pigsty-pkg-v4.0.0.el8.x86_64.tgz
094ab31bcf4a3cedbd8091bc0f3ba44c  pigsty-pkg-v4.0.0.el9.aarch64.tgz
235ccba44891b6474a76a81750712544  pigsty-pkg-v4.0.0.el9.x86_64.tgz
f2791c96db4cc17a8a4008fc8d9ad310  pigsty-pkg-v4.0.0.u22.aarch64.tgz
3099c4453eef03b766d68e04b8d5e483  pigsty-pkg-v4.0.0.u22.x86_64.tgz
49a93c2158434f1adf0d9f5bcbbb1ca5  pigsty-pkg-v4.0.0.u24.aarch64.tgz
4acaa5aeb39c6e4e23d781d37318d49b  pigsty-pkg-v4.0.0.u24.x86_64.tgz

v3.7.0

Highlights

  • PostgreSQL 18 Deep Support: Now the default major PG version, with full extension readiness!
  • Expanded OS Support: Added EL10 and Debian 13, bringing the total supported operating systems to 14.
  • Extension Growth: The PostgreSQL extension library now includes 437 entries.
  • Ansible 2.19 Compatibility: Full support for Ansible 2.19 following its breaking changes.
  • Kernel Updates: Latest versions for Supabase, PolarDB, IvorySQL, and Percona kernels.
  • Optimized Tuning: Refined logic for default PG parameters to maximize resource utilization.
  • PGEXT.CLOUD: Dedicated extension website open-sourced under Apache-2.0 license

Version Updates

  • PostgreSQL 18.1, 17.7, 16.11, 15.15, 14.20, 13.23
  • Patroni 4.1.0
  • Pgbouncer 1.25.0
  • pg_exporter 1.0.3
  • pgbackrest 2.57.0
  • Supabase 2025-11
  • PolarDB 15.15.5.0
  • FerretDB 2.7.0
  • DuckDB 1.4.2
  • Etcd 3.6.6
  • pig 0.7.4

For detailed version changes, please refer to:

API Changes

  • Implemented a refined optimization strategy for parallel execution parameters. See Tuning Guide.
  • The citus extension is no longer installed by default in rich and full templates (PG 18 support pending).
  • Added duckdb extension stubs to PostgreSQL parameter templates.
  • Capped min_wal_size, max_wal_size, and max_slot_wal_keep_size at 200 GB, 2000 GB, and 3000 GB, respectively.
  • Capped temp_file_limit at 200 GB (2 TB for OLAP workloads).
  • Increased the default connection count for the connection pool.
  • Added prometheus_port (default: 9058) to avoid conflicts with the EL10 RHEL Web Console port.
  • Changed alertmanager_port default to 9059 to avoid potential conflicts with Kafka SSL ports.
  • Added a pg_pre subtask to pg_pkg: removes conflicting LLVM packages (bpftool, python3-perf) on EL9+ prior to PG installation.
  • Added the llvm module to the default repository definition for Debian/Ubuntu.
  • Fixed package removal logic in infra-rm.yml.

Compatibility Fixes

  • Ubuntu/Debian CA Trust: Fixed incorrect warning return codes when trusting Certificate Authorities.
  • Ansible 2.19 Support: Resolved numerous compatibility issues introduced by Ansible 2.19 to ensure stability across versions:
    • Added explicit int type casting for sequence variables.
    • Migrated with_items syntax to loop.
    • Nested key exchange variables in lists to prevent character iteration on strings in newer versions.
    • Explicitly cast range usage to list.
    • Renamed reserved variables such as name and port.
    • Replaced play_hosts with ansible_play_hosts.
    • Added string casting for specific variables to prevent runtime errors.
  • EL10 Adaptation:
    • Fixed missing ansible-collection-community-crypto preventing key generation.
    • Fixed missing ansible logic packages.
    • Removed modulemd_tools, flamegraph, and timescaledb-tool.
    • Replaced java-17-openjdk with java-21-openjdk.
    • Resolved aarch64 YUM repository naming issues.
  • Debian 13 Adaptation:
    • Replaced dnsutils with bind9-dnsutils.
  • Ubuntu 24 Fixes:
    • Temporarily removed tcpdump due to upstream dependency crashes.

Checksums

e00d0c2ac45e9eff1cc77927f9cd09df  pigsty-v3.7.0.tgz
987529769d85a3a01776caefefa93ecb  pigsty-pkg-v3.7.0.d12.aarch64.tgz
2d8272493784ae35abeac84568950623  pigsty-pkg-v3.7.0.d12.x86_64.tgz
090cc2531dcc25db3302f35cb3076dfa  pigsty-pkg-v3.7.0.d13.x86_64.tgz
ddc54a9c4a585da323c60736b8560f55  pigsty-pkg-v3.7.0.el10.aarch64.tgz
d376e75c490e8f326ea0f0fbb4a8fd9b  pigsty-pkg-v3.7.0.el10.x86_64.tgz
8c2deeba1e1d09ef3d46d77a99494e71  pigsty-pkg-v3.7.0.el8.aarch64.tgz
9795e059bd884b9d1b2208011abe43cd  pigsty-pkg-v3.7.0.el8.x86_64.tgz
08b860155d6764ae817ed25f2fcf9e5b  pigsty-pkg-v3.7.0.el9.aarch64.tgz
1ac430768e488a449d350ce245975baa  pigsty-pkg-v3.7.0.el9.x86_64.tgz
e033aaf23690755848db255904ab3bcd  pigsty-pkg-v3.7.0.u22.aarch64.tgz
cc022ea89181d89d271a9aaabca04165  pigsty-pkg-v3.7.0.u22.x86_64.tgz
0e978598796db3ce96caebd76c76e960  pigsty-pkg-v3.7.0.u24.aarch64.tgz
48223898ace8812cc4ea79cf3178476a  pigsty-pkg-v3.7.0.u24.x86_64.tgz

v3.6.1

curl https://repo.pigsty.io/get | bash -s v3.6.1

Highlights

  • PostgreSQL 17.6, 16.10, 15.14, 14.19, 13.22, and 18 Beta 3 Released!
  • PGDG APT/YUM mirror for Mainland China Users
  • New home website https://pgsty.com
  • Added EL10 and Debian 13 stubs, and EL10 Terraform images

Infra Package Updates

  • Grafana 12.1.0
  • pg_exporter 1.0.2
  • pig 0.6.1
  • vector 0.49.0
  • redis_exporter 1.75.0
  • mongo_exporter 0.47.0
  • victoriametrics 1.123.0
  • victorialogs 1.28.0
  • grafana-victoriametrics-ds 0.18.3
  • grafana-victorialogs-ds 0.19.3
  • grafana-infinity-ds 3.4.1
  • etcd 3.6.4
  • ferretdb 2.5.0
  • tigerbeetle 0.16.54
  • genai-toolbox 0.12.0

Extension Package Updates

  • pg_search 0.17.3

API Changes

  • Removed br_filter from default node_kernel_modules
  • No longer use an OS minor-version directory for PGDG YUM repos

Checksums

045977aff647acbfa77f0df32d863739  pigsty-pkg-v3.6.1.d12.aarch64.tgz
636b15c2d87830f2353680732e1af9d2  pigsty-pkg-v3.6.1.d12.x86_64.tgz
700a9f6d0db9c686d371bf1c05b54221  pigsty-pkg-v3.6.1.el8.aarch64.tgz
2aff03f911dd7be363ba38a392b71a16  pigsty-pkg-v3.6.1.el8.x86_64.tgz
ce07261b02b02b36a307dab83e460437  pigsty-pkg-v3.6.1.el9.aarch64.tgz
d598d62a47bbba2e811059a53fe3b2b5  pigsty-pkg-v3.6.1.el9.x86_64.tgz
13fd68752e59f5fd2a9217e5bcad0acd  pigsty-pkg-v3.6.1.u22.aarch64.tgz
c25ccfb98840c01eb7a6e18803de55bb  pigsty-pkg-v3.6.1.u22.x86_64.tgz
0d71e58feebe5299df75610607bf428c  pigsty-pkg-v3.6.1.u24.aarch64.tgz
4fbbab1f8465166f494110c5ec448937  pigsty-pkg-v3.6.1.u24.x86_64.tgz
083d8680fa48e9fec3c3fcf481d25d2f  pigsty-v3.6.1.tgz

v3.6.0

curl https://repo.pigsty.io/get | bash -s v3.6.0

Highlights

  • Brand-new documentation site: https://doc.pgsty.com
  • Added pgsql-pitr playbook and backup/restore tutorial, improved PITR experience
  • Added kernel support: Percona PG TDE (PG17)
  • Optimized self-hosted Supabase experience, updated to the latest version, and fixed issues with the official template
  • Simplified installation steps, online install by default, bootstrap now part of install script

Improvements

  • Refactored ETCD module with dedicated remove playbook and bin utils
  • Refactored MinIO module with plain HTTP mode, better bucket provisioning options.
  • Reorganized and streamlined all configuration templates for easier use
  • Faster Docker Registry mirror for users in mainland China
  • Optimized tuned OS parameter templates for modern hardware and NVMe disks
  • Added extension pgactive for multi-master replication and sub-second failover
  • Adjusted default values for pg_fs_main / pg_fs_backup, simplified file directory structure design

Bug Fixes

  • Fixed pgbouncer configuration file error by @housei-zzy
  • Fixed OrioleDB issues on Debian platform
  • Fixed tuned shm configuration parameter issue
  • Offline packages now use the PGDG source directly, avoiding out-of-sync mirror sites
  • Fixed IvorySQL libxcrypt dependency issues
  • Replaced the slow and broken EPEL mirror
  • Fixed the haproxy_enabled flag not working

Infra Package Updates

Added Victoria Metrics / Victoria Logs related packages

  • genai-toolbox 0.9.0 (new)
  • victoriametrics 1.120.0 -> 1.121.0 (refactor)
  • vmutils 1.121.0 (renamed from victoria-metrics-utils)
  • grafana-victoriametrics-ds 0.15.1 -> 0.17.0
  • victorialogs 1.24.0 -> 1.25.1 (refactor)
  • vslogcli 1.24.0 -> 1.25.1
  • vlagent 1.25.1 (new)
  • grafana-victorialogs-ds 0.16.3 -> 0.18.1
  • prometheus 3.4.1 -> 3.5.0
  • grafana 12.0.0 -> 12.0.2
  • vector 0.47.0 -> 0.48.0
  • grafana-infinity-ds 3.2.1 -> 3.3.0
  • keepalived_exporter 1.7.0
  • blackbox_exporter 0.26.0 -> 0.27.0
  • redis_exporter 1.72.1 -> 1.77.0
  • rclone 1.69.3 -> 1.70.3

Database Package Updates

  • PostgreSQL 18 Beta2 update
  • pg_exporter 1.0.1, updated to latest dependencies and provides Docker image
  • pig 0.6.0, updated extension and repository list, with pig install subcommand
  • vip-manager 3.0.0 -> 4.0.0
  • dblab 0.32.0 -> 0.33.0
  • duckdb 1.3.1 -> 1.3.2
  • etcd 3.6.1 -> 3.6.3
  • ferretdb 2.2.0 -> 2.4.0
  • juicefs 1.2.3 -> 1.3.0
  • tigerbeetle 0.16.41 -> 0.16.50
  • pev2 1.15.0 -> 1.16.0

Extension Package Updates

  • OrioleDB 1.5 beta12
  • OriolePG 17.11
  • plv8 3.2.3 -> 3.2.4
  • postgresql_anonymizer 2.1.1 -> 2.3.0
  • pgvectorscale 0.7.1 -> 0.8.0
  • wrappers 0.5.0 -> 0.5.3
  • supautils 2.9.1 -> 2.10.0
  • citus 13.0.3 -> 13.1.0
  • timescaledb 2.20.0 -> 2.21.1
  • vchord 0.3.0 -> 0.4.3
  • pgactive 2.1.5 (new)
  • documentdb 0.103.0 -> 0.105.0
  • pg_search 0.17.0

API Changes

  • pg_fs_bkup: Renamed to pg_fs_backup, default value /data/backups.
  • pg_rm_bkup: Renamed to pg_rm_backup, default value true.
  • pg_fs_main: Default value adjusted to /data/postgres.
  • nginx_cert_validity: New parameter to control Nginx self-signed certificate validity, default 397d.
  • minio_buckets: Default value adjusted to create three buckets named pgsql, meta, data.
  • minio_users: Removed dba user, added s3user_meta and s3user_data users for meta and data buckets respectively.
  • minio_https: New parameter to allow MinIO to use HTTP mode.
  • minio_provision: New parameter to allow skipping MinIO provisioning stage (skip bucket and user creation)
  • minio_safeguard: New parameter, abort minio-rm.yml when enabled
  • minio_rm_data: New parameter, whether to remove minio data directory during minio-rm.yml
  • minio_rm_pkg: New parameter, whether to uninstall minio package during minio-rm.yml
  • etcd_learner: New parameter to control whether to init etcd instance as learner
  • etcd_rm_data: New parameter, whether to remove etcd data directory during etcd-rm.yml
  • etcd_rm_pkg: New parameter, whether to uninstall etcd package during etcd-rm.yml
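
As a sketch, the most conservative combination of these new removal-safety parameters might look like this in an inventory. The parameter names come from the list above; the placement and values are illustrative assumptions:

```yaml
all:
  vars:
    minio_safeguard: true    # abort minio-rm.yml instead of removing the cluster
    minio_rm_data: false     # keep the MinIO data directory on removal
    minio_rm_pkg: false      # keep the minio package installed
    etcd_rm_data: false      # keep the etcd data directory on removal
    etcd_rm_pkg: false       # keep the etcd package installed
```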

Checksums

ab91bc05c54b88c455bf66533c1d8d43  pigsty-v3.6.0.tgz
cea861e2b4ec7ff5318e1b3c30b470cb  pigsty-pkg-v3.6.0.d12.aarch64.tgz
2f253af87e19550057c0e7fca876d37c  pigsty-pkg-v3.6.0.d12.x86_64.tgz
0158145b9bbf0e4a120b8bfa8b44f857  pigsty-pkg-v3.6.0.el8.aarch64.tgz
07330d687d04d26e7d569c8755426c5a  pigsty-pkg-v3.6.0.el8.x86_64.tgz
311df5a342b39e3288ebb8d14d81e0d1  pigsty-pkg-v3.6.0.el9.aarch64.tgz
92aad54cc1822b06d3e04a870ae14e29  pigsty-pkg-v3.6.0.el9.x86_64.tgz
c4fadf1645c8bbe3e83d5a01497fa9ca  pigsty-pkg-v3.6.0.u22.aarch64.tgz
5477ed6be96f156a43acd740df8a9b9b  pigsty-pkg-v3.6.0.u22.x86_64.tgz
196169afc1be02f93fcc599d42d005ca  pigsty-pkg-v3.6.0.u24.aarch64.tgz
dbe5c1e8a242a62fe6f6e1f6e6b6c281  pigsty-pkg-v3.6.0.u24.x86_64.tgz

v3.5.0

Highlights

  • New website: https://pgsty.com
  • PostgreSQL 18 (Beta) support: monitoring via pg_exporter 1.0.0, installer alias via pig 0.4.2, and a pg18 template
  • 421 bundled extensions, now including OrioleDB and OpenHalo kernels on all platforms
  • pig do CLI replaces legacy bin/ scripts
  • Hardening for self-hosted Supabase (replication lag, key distribution, etc.)
  • Code & architecture refactor — slimmer tasks, cleaner defaults for Postgres & PgBouncer
  • Monitoring stack refresh — Grafana 12, pg_exporter 1.0, new panels & plugins
  • Run vagrant on Apple Silicon

curl https://repo.pigsty.io/get | bash -s v3.5.0

Module Changes

  • Add PostgreSQL 18 support
    • PG18 metrics support with pg_exporter 1.0.0+
    • PG18 install support with pig 0.4.1+
    • New config template pg18.yml
  • Refactored pgsql module
    • Split monitoring into a new pg_monitor role; removed clean logic
    • Pruned duplicate tasks, dropped dir/utils block, renamed templates (no .j2)
    • All extensions install in extensions schema (Supabase best-practice)
    • Added SET search_path='' to every monitoring function
    • Tuned PgBouncer defaults (larger pool, cleanup query); new pgbouncer_ignore_param
    • New pg_key task to generate pgsodium master keys
    • Enabled sync_replication_slots by default on PG 17
    • Retagged subtasks for clearer structure
  • Refactored pg_remove module
    • New flags pg_rm_data, pg_rm_bkup, pg_rm_pkg control what gets wiped
    • Clearer role layout & tagging
  • Added new pg_monitor module
    • pgbouncer_exporter no longer shares configuration files with pg_exporter
    • Added monitoring metrics for TimescaleDB and Citus
    • Using pg_exporter 0.9.0 with updated replication slot metrics for PG16/17
    • Using more compact, newly designed collector configuration files
  • Supabase Enhancement (thanks @lawso017 for the contribution)
    • Updated Supabase containers and schemas to the latest version
    • Support pgsodium server key loading
    • Fixed Logflare lag issue with supa-kick crontab
    • Added SET search_path clause for monitor functions
  • Added new pig do command to the CLI, replacing the shell scripts in bin/

Infra Package Updates

  • pig 0.4.2
  • duckdb 1.3.0
  • etcd 3.6.0
  • vector 0.47.0
  • minio 20250422221226
  • mcli 20250416181326
  • pev 1.5.0
  • rclone 1.69.3
  • mtail 3.0.8 (new)

Observability Package Updates

  • grafana 12.0.0
  • grafana-victorialogs-ds 0.16.3
  • grafana-victoriametrics-ds 0.15.1
  • grafana-infinity-ds 3.2.1
  • grafana_plugins 12.0.0
  • prometheus 3.4.0
  • pushgateway 1.11.1
  • nginx_exporter 1.4.2
  • pg_exporter 1.0.0
  • pgbackrest_exporter 0.20.0
  • redis_exporter 1.72.1
  • keepalived_exporter 1.6.2
  • victoriametrics 1.117.1
  • victoria_logs 1.22.2

Database Package Updates

  • PostgreSQL 17.5, 16.9, 15.13, 14.18, 13.21
  • PostgreSQL 18beta1 support
  • pgbouncer 1.24.1
  • pgbackrest 2.55
  • pgbadger 13.1

Extension Package Updates

  • spat 0.1.0a4 new extension
  • pgsentinel 1.1.0 new extension
  • pgdd 0.6.0 (pgrx 0.14.1), new extension added back
  • convert 0.0.4 (pgrx 0.14.1) new extension
  • pg_tokenizer.rs 0.1.0 (pgrx 0.13.1)
  • pg_render 0.1.2 (pgrx 0.12.8)
  • pgx_ulid 0.2.0 (pgrx 0.12.7)
  • pg_idkit 0.3.0 (pgrx 0.14.1)
  • pg_ivm 1.11.0
  • orioledb 1.4.0 beta11 RPM, added Debian/Ubuntu support
  • openhalo 14.10, added Debian/Ubuntu support
  • omnigres 20250507 (miss on d12/u22)
  • citus 12.0.3
  • timescaledb 2.20.0 (DROP PG14 support)
  • supautils 2.9.2
  • pg_envvar 1.0.1
  • pgcollection 1.0.0
  • aggs_for_vecs 1.4.0
  • pg_tracing 0.1.3
  • pgmq 1.5.1
  • tzf-pg 0.2.0 (pgrx 0.14.1)
  • pg_search 0.15.18 (pgrx 0.14.1)
  • anon 2.1.1 (pgrx 0.14.1)
  • pg_parquet 0.4.0 (pgrx 0.14.1)
  • pg_cardano 1.0.5 (pgrx 0.12 -> 0.14.1)
  • pglite_fusion 0.0.5 (pgrx 0.12.8 -> 0.14.1)
  • vchord_bm25 0.2.1 (pgrx 0.13.1)
  • vchord 0.3.0 (pgrx 0.13.1)
  • pg_vectorize 0.22.1 (pgrx 0.13.1)
  • wrappers 0.4.6 (pgrx 0.12.9)
  • timescaledb-toolkit 1.21.0 (pgrx 0.12.9)
  • pgvectorscale 0.7.1 (pgrx 0.12.9)
  • pg_session_jwt 0.3.1 (pgrx 0.12.6) -> 0.12.9
  • pg_timetable 5.13.0
  • ferretdb 2.2.0
  • documentdb 0.103.0 (+aarch64 support)
  • pgml 2.10.0 (pgrx 0.12.9)
  • sqlite_fdw 2.5.0 (fix pg17 deb)
  • tzf 0.2.2 (pgrx 0.14.1, renamed src)
  • pg_vectorize 0.22.2 (pgrx 0.13.1)
  • wrappers 0.5.0 (pgrx 0.12.9)

Checksums

c7e5ce252ddf848e5f034173e0f29345  pigsty-v3.5.0.tgz
ba31f311a16d615c1ee1083dc5a53566  pigsty-pkg-v3.5.0.d12.aarch64.tgz
3aa5c56c8f0de53303c7100f2b3934f4  pigsty-pkg-v3.5.0.d12.x86_64.tgz
a098cb33822633357e6880eee51affd6  pigsty-pkg-v3.5.0.el8.x86_64.tgz
63723b0aeb4d6c02fff0da2c78e4de31  pigsty-pkg-v3.5.0.el9.aarch64.tgz
eb91c8921d7b8a135d8330c77468bfe7  pigsty-pkg-v3.5.0.el9.x86_64.tgz
87ff25e14dfb9001fe02f1dfbe70ae9e  pigsty-pkg-v3.5.0.u22.x86_64.tgz
18be503856f6b39a59efbd1d0a8556b6  pigsty-pkg-v3.5.0.u24.aarch64.tgz
2bbef6a18cfa99af9cd175ef0adf873c  pigsty-pkg-v3.5.0.u24.x86_64.tgz

v3.4.1

GitHub Release Page: v3.4.1

  • Added support for MySQL wire-compatible PostgreSQL kernel on EL systems: openHalo
  • Added support for OLTP-enhanced PostgreSQL kernel on EL systems: orioledb
  • Optimized pgAdmin 9.2 application template with automatic server list updates and pgpass password population
  • Increased PG default max connections to 250, 500, 1000
  • Removed the mysql_fdw extension with dependency errors from EL8

Infra Updates

  • pig 0.3.4
  • etcd 3.5.21
  • restic 0.18.0
  • ferretdb 2.1.0
  • tigerbeetle 0.16.34
  • pg_exporter 0.8.1
  • node_exporter 1.9.1
  • grafana 11.6.0
  • zfs_exporter 3.8.1
  • mongodb_exporter 0.44.0
  • victoriametrics 1.114.0
  • minio 20250403145628
  • mcli 20250403170756

Extension Update

  • Bump pg_search to 0.15.13
  • Bump citus to 13.0.3
  • Bump timescaledb to 2.19.1
  • Bump pgcollection RPM to 1.0.0
  • Bump pg_vectorize RPM to 0.22.1
  • Bump pglite_fusion RPM to 0.0.4
  • Bump aggs_for_vecs RPM to 1.4.0
  • Bump pg_tracing RPM to 0.1.3
  • Bump pgmq RPM to 1.5.1

Checksums

471c82e5f050510bd3cc04d61f098560  pigsty-v3.4.1.tgz
4ce17cc1b549cf8bd22686646b1c33d2  pigsty-pkg-v3.4.1.d12.aarch64.tgz
c80391c6f93c9f4cad8079698e910972  pigsty-pkg-v3.4.1.d12.x86_64.tgz
811bf89d1087512a4f8801242ca8bed5  pigsty-pkg-v3.4.1.el9.x86_64.tgz
9fe2e6482b14a3e60863eeae64a78945  pigsty-pkg-v3.4.1.u22.x86_64.tgz

v3.4.0

GitHub Release Page: v3.4.0

Introduction Blog: Pigsty v3.4 MySQL Compatibility and Overall Enhancements

New Features

  • Added new pgBackRest backup monitoring metrics and dashboards
  • Enhanced Nginx server configuration options, with support for automated Certbot issuance
  • Now prioritizing PostgreSQL’s built-in C/C.UTF-8 locale settings
  • IvorySQL 4.4 is now fully supported across all platforms (RPM/DEB on x86/ARM)
  • Added new software packages: Juicefs, Restic, TimescaleDB EventStreamer
  • The Apache AGE graph database extension now fully supports PostgreSQL 13–17 on EL
  • Improved the app.yml playbook: launch standard Docker apps without extra config
  • Bumped Supabase, Dify, and Odoo app templates to their latest versions
  • Add electric app template, local-first PostgreSQL Sync Engine

Infra Packages

  • +restic 0.17.3
  • +juicefs 1.2.3
  • +timescaledb-event-streamer 0.12.0
  • Prometheus 3.2.1
  • AlertManager 0.28.1
  • blackbox_exporter 0.26.0
  • node_exporter 1.9.0
  • mysqld_exporter 0.17.2
  • kafka_exporter 1.9.0
  • redis_exporter 1.69.0
  • pgbackrest_exporter 0.19.0-2
  • DuckDB 1.2.1
  • etcd 3.5.20
  • FerretDB 2.0.0
  • tigerbeetle 0.16.31
  • vector 0.45.0
  • VictoriaMetrics 1.113.0
  • VictoriaLogs 1.17.0
  • rclone 1.69.1
  • pev2 1.14.0
  • grafana-victorialogs-ds 0.16.0
  • grafana-victoriametrics-ds 0.14.0
  • grafana-infinity-ds 3.0.0

PostgreSQL Related

  • Patroni 4.0.5
  • PolarDB 15.12.3.0-e1e6d85b
  • IvorySQL 4.4
  • pgbackrest 2.54.2
  • pev2 1.14
  • WiltonDB 13.17

PostgreSQL Extensions

  • pgspider_ext 1.3.0 (new extension)
  • apache age 13–17 el rpm (1.5.0)
  • timescaledb 2.18.2 → 2.19.0
  • citus 13.0.1 → 13.0.2
  • documentdb 1.101-0 → 1.102-0
  • pg_analytics 0.3.4 → 0.3.7
  • pg_search 0.15.2 → 0.15.8
  • pg_ivm 1.9 → 1.10
  • emaj 4.4.0 → 4.6.0
  • pgsql_tweaks 0.10.0 → 0.11.0
  • pgvectorscale 0.4.0 → 0.6.0 (pgrx 0.12.5)
  • pg_session_jwt 0.1.2 → 0.2.0 (pgrx 0.12.6)
  • wrappers 0.4.4 → 0.4.5 (pgrx 0.12.9)
  • pg_parquet 0.2.0 → 0.3.1 (pgrx 0.13.1)
  • vchord 0.2.1 → 0.2.2 (pgrx 0.13.1)
  • pg_tle 1.2.0 → 1.5.0
  • supautils 2.5.0 → 2.6.0
  • sslutils 1.3 → 1.4
  • pg_profile 4.7 → 4.8
  • pg_snakeoil 1.3 → 1.4
  • pg_jsonschema 0.3.2 → 0.3.3
  • pg_incremental 1.1.1 → 1.2.0
  • pg_stat_monitor 2.1.0 → 2.1.1
  • ddl_historization 0.7 → 0.0.7 (bug fix)
  • pg_sqlog 3.1.7 → 1.6 (bug fix)
  • pg_random removed development suffix (bug fix)
  • asn1oid 1.5 → 1.6
  • table_log 0.6.1 → 0.6.4

Interface Changes

  • Added new Docker parameters: docker_data and docker_storage_driver (#521 by @waitingsong)
  • Added new Infra parameter: alertmanager_port, which lets you specify the AlertManager port
  • Added new Infra parameter: certbot_sign, apply for cert during nginx init? (false by default)
  • Added new Infra parameter: certbot_email, specifying the email used when requesting certificates via Certbot
  • Added new Infra parameter: certbot_options, specifying additional parameters for Certbot
  • Updated IvorySQL to place its default binary under /usr/ivory-4 starting in IvorySQL 4.4
  • Changed the default for pg_lc_ctype and other locale-related parameters from en_US.UTF-8 to C
  • For PostgreSQL 17, if using UTF8 encoding with C or C.UTF-8 locales, PostgreSQL’s built-in localization rules now take priority
  • configure automatically detects whether C.utf8 is supported by both the PG version and the environment, and adjusts locale-related options accordingly
  • Updated the default value of pg_packages to pgsql-main patroni pgbouncer pgbackrest pg_exporter pgbadger vip-manager
  • Updated the default value of repo_packages to [node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules]
  • Removed LANG and LC_ALL environment variable settings from /etc/profile.d/node.sh
  • Now using bento/rockylinux-8 and bento/rockylinux-9 as the Vagrant box images for EL
  • Added a new alias, extra_modules, which includes additional optional modules
  • Updated PostgreSQL aliases: postgresql, pgsql-main, pgsql-core, pgsql-full
  • GitLab repositories are now included among available modules
  • The Docker module has been merged into the Infra module
  • The node.yml playbook now includes a node_pip task to configure a pip mirror on each node
  • The pgsql.yml playbook now includes a pgbackrest_exporter task for collecting backup metrics
  • The Makefile now allows the use of META/PKG environment variables
  • Added /pg/spool directory as temporary storage for pgBackRest
  • Disabled pgBackRest’s link-all option by default
  • Enabled block-level incremental backups for MinIO repositories by default

Bug Fixes

  • Fixed the exit status code in pg-backup (#532 by @waitingsong)
  • In pg-tune-hugepage, restricted PostgreSQL to use only large pages (#527 by @waitingsong)
  • Fixed logic errors in the pg-role task
  • Corrected type conversion for hugepage configuration parameters
  • Fixed default value issues for node_repo_modules in the slim template

Checksums

768bea3bfc5d492f4c033cb019a81d3a  pigsty-v3.4.0.tgz
7c3d47ef488a9c7961ca6579dc9543d6  pigsty-pkg-v3.4.0.d12.aarch64.tgz
b5d76aefb1e1caa7890b3a37f6a14ea5  pigsty-pkg-v3.4.0.d12.x86_64.tgz
42dacf2f544ca9a02148aeea91f3153a  pigsty-pkg-v3.4.0.el8.aarch64.tgz
d0a694f6cd6a7f2111b0971a60c49ad0  pigsty-pkg-v3.4.0.el8.x86_64.tgz
7caa82254c1b0750e89f78a54bf065f8  pigsty-pkg-v3.4.0.el9.aarch64.tgz
8f817e5fad708b20ee217eb2e12b99cb  pigsty-pkg-v3.4.0.el9.x86_64.tgz
8b2fcaa6ef6fd8d2726f6eafbb488aaf  pigsty-pkg-v3.4.0.u22.aarch64.tgz
83291db7871557566ab6524beb792636  pigsty-pkg-v3.4.0.u22.x86_64.tgz
c927238f0343cde82a4a9ab230ecd2ac  pigsty-pkg-v3.4.0.u24.aarch64.tgz
14cbcb90693ed5de8116648a1f2c3e34  pigsty-pkg-v3.4.0.u24.x86_64.tgz

v3.3.0

  • Total available extensions increased to 404!
  • PostgreSQL February Minor Updates: 17.4, 16.8, 15.12, 14.17, 13.20
  • New Feature: app.yml script for auto-installing apps like Odoo, Supabase, Dify.
  • New Feature: Further Nginx configuration customization in infra_portal.
  • New Feature: Added Certbot support for quick free HTTPS certificate requests.
  • New Feature: Pure-text extension list now supported in pg_default_extensions.
  • New Feature: Default repositories now include mongo, redis, groonga, haproxy, etc.
  • New Parameter: node_aliases to add command aliases for Nodes.
  • Fix: Resolved default EPEL repo address issue in Bootstrap script.
  • Improvement: Added Aliyun mirror for Debian Security repository.
  • Improvement: pgBackRest backup support for IvorySQL kernel.
  • Improvement: ARM64 and Debian/Ubuntu support for PolarDB.
  • pg_exporter 0.8.0 now supports new metrics in pgbouncer 1.24.
  • New Feature: Auto-completion for common commands like git, docker, systemctl #506 #507 by @waitingsong.
  • Improvement: Refined ignore_startup_parameters in pgbouncer config template #488 by @waitingsong.
  • New homepage design: Pigsty’s website now features a fresh new look.
  • Extension Directory: Detailed information and download links for RPM/DEB binary packages.
  • Extension Build: pig CLI now auto-sets PostgreSQL extension build environment.

New Extensions

12 new PostgreSQL extensions added, bringing the total to 404 available extensions.

Bump Extension

  • citus 13.0.0 -> 13.0.1
  • pg_duckdb 0.2.0 -> 0.3.1
  • pg_mooncake 0.1.0 -> 0.1.2
  • timescaledb 2.17.2 -> 2.18.2
  • supautils 2.5.0 -> 2.6.0
  • supabase_vault 0.3.1 (now a C extension)
  • VectorChord 0.1.0 -> 0.2.1
  • pg_bulkload 3.1.22 (+pg17)
  • pg_store_plan 1.8 (+pg17)
  • pg_search 0.14 -> 0.15.2
  • pg_analytics 0.3.0 -> 0.3.4
  • pgroonga 3.2.5 -> 4.0.0
  • zhparser 2.2 -> 2.3
  • pg_vectorize 0.20.0 -> 0.21.1
  • pg_net 0.14.0
  • pg_curl 2.4.2
  • table_version 1.10.3 -> 1.11.0
  • pg_duration 1.0.2
  • pg_graphql 1.5.9 -> 1.5.11
  • vchord 0.1.1 -> 0.2.1 (+pg13)
  • vchord_bm25 0.1.0 -> 0.1.1
  • pg_mooncake 0.1.1 -> 0.1.2
  • pgddl 0.29
  • pgsql_tweaks 0.11.0

Infra Updates

  • pig 0.1.3 -> 0.3.0
  • pushgateway 1.10.0 -> 1.11.0
  • alertmanager 0.27.0 -> 0.28.0
  • nginx_exporter 1.4.0 -> 1.4.1
  • pgbackrest_exporter 0.18.0 -> 0.19.0
  • redis_exporter 1.66.0 -> 1.67.0
  • mongodb_exporter 0.43.0 -> 0.43.1
  • VictoriaMetrics 1.107.0 -> 1.111.0
  • VictoriaLogs v1.3.2 -> 1.9.1
  • DuckDB 1.1.3 -> 1.2.0
  • Etcd 3.5.17 -> 3.5.18
  • pg_timetable 5.10.0 -> 5.11.0
  • FerretDB 1.24.0 -> 2.0.0-rc
  • tigerbeetle 0.16.13 -> 0.16.27
  • grafana 11.4.0 -> 11.5.2
  • vector 0.43.1 -> 0.44.0
  • minio 20241218131544 -> 20250218162555
  • mcli 20241121172154 -> 20250215103616
  • rclone 1.68.2 -> 1.69.0
  • vray 5.23 -> 5.28

v3.2.2

What’s Changed

  • Bump IvorySQL to 4.2 (PostgreSQL 17.2)
  • Add Arm64 and Debian support for PolarDB kernel
  • Add certbot and certbot-nginx to default infra_packages
  • Increase pgbouncer max_prepared_statements to 256
  • remove pgxxx-citus package alias
  • hide pgxxx-olap category in pg_extensions by default

v3.2.1

Highlights

  • 351 PostgreSQL Extensions, including the powerful postgresql-anonymizer 2.0
  • IvorySQL 4.0 support for EL 8/9
  • Now use the Pigsty compiled Citus, TimescaleDB and pgroonga on all distros
  • Add self-hosting Odoo template and support

Bump software versions

  • pig CLI 0.1.2 self-updating capability
  • prometheus 3.1.0

Add New Extension

  • add pg_anon 2.0.0
  • add omnisketch 1.0.2
  • add ddsketch 1.0.1
  • add pg_duration 1.0.1
  • add ddl_historization 0.0.7
  • add data_historization 1.1.0
  • add schedoc 0.0.1
  • add floatfile 1.3.1
  • add pg_upless 0.0.3
  • add pg_task 1.0.0
  • add pg_readme 0.7.0
  • add vasco 0.1.0
  • add pg_xxhash 0.0.1

Update Extension

  • lower_quantile 1.0.3
  • quantile 1.1.8
  • sequential_uuids 1.0.3
  • pgmq 1.5.0 (subdir)
  • floatvec 1.1.1
  • pg_parquet 0.2.0
  • wrappers 0.4.4
  • pg_later 0.3.0
  • topn fix for deb.arm64
  • add age 17 on debian
  • powa + pg17, 5.0.1
  • h3 + pg17
  • ogr_fdw + pg17
  • age + pg17 1.5 on debian
  • pgtap + pg17 1.3.3
  • repmgr
  • topn + pg17
  • pg_partman 5.2.4
  • credcheck 3.0
  • ogr_fdw 1.1.5
  • ddlx 0.29
  • postgis 3.5.1
  • tdigest 1.4.3
  • pg_repack 1.5.2

v3.2.0

Highlights

  • New CLI: Introducing the pig command-line tool for managing extension plugins.
  • ARM64 Support: 390 extensions are now available for ARM64 across five major distributions.
  • Supabase Update: Latest Supabase Release Week updates are now supported for self-hosting on all distributions.
  • Grafana v11.4: Upgraded Grafana to version 11.4, featuring a new Infinity datasource.

Package Changes

  • New Extensions
    • Added timescaledb, timescaledb-loader, timescaledb-toolkit, and timescaledb-tool to the PIGSTY repository.
    • Added a custom-compiled pg_timescaledb for EL.
    • Added pgroonga, custom-compiled for all EL variants.
    • Added vchord 0.1.0.
    • Added pg_bestmatch.rs 0.0.1.
    • Added pglite_fusion 0.0.3.
    • Added pgpdf 0.1.0.
  • Updated Extensions
    • pgvectorscale: 0.4.0 → 0.5.1
    • pg_parquet: 0.1.0 → 0.1.1
    • pg_polyline: 0.0.1
    • pg_cardano: 1.0.2 → 1.0.3
    • pg_vectorize: 0.20.0
    • pg_duckdb: 0.1.0 → 0.2.0
    • pg_search: 0.13.0 → 0.13.1
    • aggs_for_vecs: 1.3.1 → 1.3.2
  • Infrastructure
    • Added promscale 0.17.0
    • Added grafana-plugins 11.4
    • Added grafana-infinity-plugins
    • Added grafana-victoriametrics-ds
    • Added grafana-victorialogs-ds
    • vip-manager: 2.8.0 → 3.0.0
    • vector: 0.42.0 → 0.43.0
    • grafana: 11.3 → 11.4
    • prometheus: 3.0.0 → 3.0.1 (package name changed from prometheus2 to prometheus)
    • nginx_exporter: 1.3.0 → 1.4.0
    • mongodb_exporter: 0.41.2 → 0.43.0
    • VictoriaMetrics: 1.106.1 → 1.107.0
    • VictoriaLogs: 1.0.0 → 1.3.2
    • pg_timetable: 5.9.0 → 5.10.0
    • tigerbeetle: 0.16.13 → 0.16.17
    • pg_exporter: 0.7.0 → 0.7.1
  • New Docker App
    • Added mattermost, the open-source Slack alternative, as a self-hosting template
  • Bug Fixes
    • Added python3-cdiff for el8.aarch64 to fix missing Patroni dependency.
    • Added timescaledb-tools for el9.aarch64 to fix missing package in official repo.
    • Added pg_filedump for el9.aarch64 to fix missing package in official repo.
  • Removed Extensions
    • pg_mooncake: Removed due to conflicts with pg_duckdb.
    • pg_top: Removed because of repeated version issues and quality concerns.
    • hunspell_pt_pt: Removed because of conflict with official PG dictionary files.
    • pgml: Disabled by default (no longer downloaded or installed).

API Changes

  • repo_url_packages now defaults to an empty array; packages are installed via OS package managers.
  • grafana_plugin_cache is deprecated; Grafana plugins are now installed via OS package managers.
  • grafana_plugin_list is deprecated for the same reason.
  • The 36-node “production” template has been renamed to simu.
  • Auto-generated code under node_id/vars now includes aarch64 support.
  • infra_packages now includes the pig CLI tool.
  • The configure command now updates the version numbers of pgsql-xxx aliases in auto-generated config files.
  • Update terraform templates with Makefile shortcuts and better provision experience

Checksums

c42da231067f25104b71a065b4a50e68  pigsty-pkg-v3.2.0.d12.aarch64.tgz
ebb818f98f058f932b57d093d310f5c2  pigsty-pkg-v3.2.0.d12.x86_64.tgz
d2b85676235c9b9f2f8a0ad96c5b15fd  pigsty-pkg-v3.2.0.el9.aarch64.tgz
649f79e1d94ec1845931c73f663ae545  pigsty-pkg-v3.2.0.el9.x86_64.tgz
24c0be1d8436f3c64627c12f82665a17  pigsty-pkg-v3.2.0.u22.aarch64.tgz
0b9be0e137661e440cd4f171226d321d  pigsty-pkg-v3.2.0.u22.x86_64.tgz
8fdc6a60820909b0a2464b0e2b90a3a6  pigsty-v3.2.0.tgz

v3.1.0

2024-11-24 : ARM64 & Ubuntu24, PG17 by Default, Better Supabase & MinIO

https://github.com/pgsty/pigsty/releases/tag/v3.1.0


v3.0.4

2024-10-28 : PostgreSQL 17 Extensions, Better self-hosting Supabase

https://github.com/pgsty/pigsty/releases/tag/v3.0.4


v3.0.3

2024-09-27 : PostgreSQL 17, Etcd Enhancement, IvorySQL 3.4, PostGIS 3.5

https://github.com/pgsty/pigsty/releases/tag/v3.0.3


v3.0.2

2024-09-07 : Mini Install, PolarDB 15, Bloat View Update

https://github.com/pgsty/pigsty/releases/tag/v3.0.2


v3.0.1

2024-08-31 : Oracle Compatibility, Patroni 4.0, Routine Bug Fix

https://github.com/pgsty/pigsty/releases/tag/v3.0.1


v3.0.0

2024-08-30 : Extension Exploding & Pluggable Kernels (MSSQL, Oracle)

https://github.com/pgsty/pigsty/releases/tag/v3.0.0


v2.7.0

2024-05-16 : Extension Overwhelming, new docker apps

https://github.com/pgsty/pigsty/releases/tag/v2.7.0


v2.6.0

2024-02-29 : PG 16 as default version, ParadeDB & DuckDB

https://github.com/pgsty/pigsty/releases/tag/v2.6.0


v2.5.1

2023-12-01 : Routine update, pg16 major extensions

https://github.com/pgsty/pigsty/releases/tag/v2.5.1


v2.5.0

2023-10-24 : Ubuntu/Debian Support: bullseye, bookworm, jammy, focal

https://github.com/pgsty/pigsty/releases/tag/v2.5.0


v2.4.1

2023-09-24 : Supabase/PostgresML support, graphql, jwt, pg_net, vault

https://github.com/pgsty/pigsty/releases/tag/v2.4.1


v2.4.0

2023-09-14 : PG16, RDS Monitor, New Extensions

https://github.com/pgsty/pigsty/releases/tag/v2.4.0


v2.3.1

2023-09-01 : PGVector with HNSW, PG16 RC1, Chinese Docs, Bug Fix

https://github.com/pgsty/pigsty/releases/tag/v2.3.1


v2.3.0

2023-08-20 : PGSQL/REDIS Update, NODE VIP, Mongo/FerretDB, MYSQL Stub

https://github.com/pgsty/pigsty/releases/tag/v2.3.0


v2.2.0

2023-08-04 : Dashboard & Provision overhaul, UOS compatibility

https://github.com/pgsty/pigsty/releases/tag/v2.2.0


v2.1.0

2023-06-10 : PostgreSQL 12 ~ 16beta support

https://github.com/pgsty/pigsty/releases/tag/v2.1.0


v2.0.2

2023-03-31 : Add pgvector support and fix MinIO CVE

https://github.com/pgsty/pigsty/releases/tag/v2.0.2


v2.0.1

2023-03-21 : v2 Bug Fix, security enhance and bump grafana version

https://github.com/pgsty/pigsty/releases/tag/v2.0.1


v2.0.0

2023-02-28 : Compatibility Security Maintainability Enhancement

https://github.com/pgsty/pigsty/releases/tag/v2.0.0


v1.5.1

2022-06-18 : Grafana Security Hotfix

https://github.com/pgsty/pigsty/releases/tag/v1.5.1


v1.5.0

2022-05-31 : Docker Applications

https://github.com/pgsty/pigsty/releases/tag/v1.5.0


v1.4.1

2022-04-20 : Bug fix & Full translation of English documents.

https://github.com/pgsty/pigsty/releases/tag/v1.4.1


v1.4.0

2022-03-31 : MatrixDB Support, Separated INFRA, NODES, PGSQL, REDIS

https://github.com/pgsty/pigsty/releases/tag/v1.4.0


v1.3.0

2021-11-30 : PGCAT Overhaul & PGSQL Enhancement & Redis Support Beta

https://github.com/pgsty/pigsty/releases/tag/v1.3.0


v1.2.0

2021-11-03 : Upgrade default Postgres to 14, monitoring existing pg

https://github.com/pgsty/pigsty/releases/tag/v1.2.0


v1.1.0

2021-10-12 : HomePage, JupyterLab, PGWEB, Pev2 & Pgbadger

https://github.com/pgsty/pigsty/releases/tag/v1.1.0


v1.0.0

2021-07-26 : v1 GA, Monitoring System Overhaul

https://github.com/pgsty/pigsty/releases/tag/v1.0.0


v0.9.0

2021-04-04 : Pigsty GUI, CLI, Logging Integration

https://github.com/pgsty/pigsty/releases/tag/v0.9.0


v0.8.0

2021-03-28 : Service Provision

https://github.com/pgsty/pigsty/releases/tag/v0.8.0


v0.7.0

2021-03-01 : Monitor only deployment

https://github.com/pgsty/pigsty/releases/tag/v0.7.0


v0.6.0

2021-02-19 : Architecture Enhancement

https://github.com/pgsty/pigsty/releases/tag/v0.6.0


v0.5.0

2021-01-07 : Database Customize Template

https://github.com/pgsty/pigsty/releases/tag/v0.5.0


v0.4.0

2020-12-14 : PostgreSQL 13 Support, Official Documentation

https://github.com/pgsty/pigsty/releases/tag/v0.4.0


v0.3.0

2020-10-22 : Provisioning Solution GA

https://github.com/pgsty/pigsty/releases/tag/v0.3.0


v0.2.0

2020-07-10 : PGSQL Monitoring v6 GA

https://github.com/pgsty/pigsty/commit/385e33a62a19817e8ba19997260e6b77d99fe2ba


v0.1.0

2020-06-20 : Validation on Testing Environment

https://github.com/pgsty/pigsty/commit/1cf2ea5ee91db071de00ec805032928ff582453b


v0.0.5

2020-08-19 : Offline Installation Mode

https://github.com/pgsty/pigsty/commit/0fe9e829b298fe5e56307de3f78c95071de28245


v0.0.4

2020-07-27 : Refactor playbooks into ansible roles

https://github.com/pgsty/pigsty/commit/90b44259818d2c71e37df5250fe8ed1078a883d0


v0.0.3

2020-06-22 : Interface enhancement

https://github.com/pgsty/pigsty/commit/4c5c68ccd57bc32a9e9c98aa3f264aa19f45c7ee


v0.0.2

2020-04-30 : First Commit

https://github.com/pgsty/pigsty/commit/dd646775624ddb33aef7884f4f030682bdc371f8


v0.0.1

2019-05-15 : POC

https://github.com/Vonng/pg/commit/fa2ade31f8e81093eeba9d966c20120054f0646b


2.13 - Comparison

This article compares Pigsty with similar products and projects, highlighting feature differences.

Comparison with RDS

Pigsty is a local-first RDS alternative released under Apache-2.0, deployable on your own physical/virtual machines or cloud servers.

We’ve chosen Amazon AWS RDS for PostgreSQL (the global market leader) and Alibaba Cloud RDS for PostgreSQL (China’s market leader) as benchmarks for comparison.

Both Aliyun RDS and AWS RDS are closed-source cloud database services, available only through rental models on public clouds. The following comparison is based on the latest PostgreSQL 16 as of February 2024.


Feature Comparison

| Feature | Pigsty | Aliyun RDS | AWS RDS |
|---|---|---|---|
| Major Version Support | 13 - 18 | 13 - 18 | 13 - 18 |
| Read Replicas | Supports unlimited read replicas | Standby instances not exposed to users | Standby instances not exposed to users |
| Read/Write Splitting | Port-based traffic separation | Separate paid component | Separate paid component |
| Fast/Slow Separation | Supports offline ETL instances | Not available | Not available |
| Cross-Region DR | Supports standby clusters | Multi-AZ deployment supported | Multi-AZ deployment supported |
| Delayed Replicas | Supports delayed instances | Not available | Not available |
| Load Balancing | HAProxy / LVS | Separate paid component | Separate paid component |
| Connection Pool | Pgbouncer | Separate paid component: RDS | Separate paid component: RDS Proxy |
| High Availability | Patroni / etcd | Requires HA edition | Requires HA edition |
| Point-in-Time Recovery | pgBackRest / MinIO | Backup supported | Backup supported |
| Metrics Monitoring | Prometheus / Exporter | Free basic / Paid advanced | Free basic / Paid advanced |
| Log Collection | Loki / Promtail | Basic support | Basic support |
| Visualization | Grafana / Echarts | Basic monitoring | Basic monitoring |
| Alert Aggregation | AlertManager | Basic support | Basic support |

Key Extensions

Here is a comparison of some key extensions, based on PostgreSQL 16 (as of 2024-02-28):

| Extension | Pigsty RDS / PGDG Official Repo | Aliyun RDS | AWS RDS |
|---|---|---|---|
| Install Extensions | Free to install | Not allowed | Not allowed |
| Geospatial | PostGIS 3.4.2 | PostGIS 3.3.4 / Ganos 6.1 | PostGIS 3.4.1 |
| Point Cloud | PG PointCloud 1.2.5 | Ganos PointCloud 6.1 | |
| Vector Embedding | PGVector 0.6.1 / Svector 0.5.6 | pase 0.0.1 | PGVector 0.6 |
| Machine Learning | PostgresML 2.8.1 | | |
| Time Series | TimescaleDB 2.14.2 | | |
| Horizontal Scaling | Citus 12.1 | | |
| Columnar Storage | Hydra 1.1.1 | | |
| Full Text Search | pg_bm25 0.5.6 | | |
| Graph Database | Apache AGE 1.5.0 | | |
| GraphQL | PG GraphQL 1.5.0 | | |
| OLAP | pg_analytics 0.5.6 | | |
| Message Queue | pgq 3.5.0 | | |
| DuckDB | duckdb_fdw 1.1 | | |
| Fuzzy Tokenization | zhparser 1.1 / pg_bigm 1.2 | zhparser 1.0 / pg_jieba | pg_bigm 1.2 |
| CDC Extraction | wal2json 2.5.3 | | wal2json 2.5 |
| Bloat Management | pg_repack 1.5.0 | pg_repack 1.4.8 | pg_repack 1.5.0 |
AWS RDS PG Available Extensions

AWS RDS for PostgreSQL 16 available extensions (excluding PG built-in extensions)

| name | pg16 | pg15 | pg14 | pg13 | pg12 | pg11 | pg10 |
|---|---|---|---|---|---|---|---|
| amcheck | 1.3 | 1.3 | 1.3 | 1.2 | 1.2 | yes | 1 |
| auto_explain | yes | yes | yes | yes | yes | yes | yes |
| autoinc | 1 | 1 | 1 | 1 | null | null | null |
| bloom | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| bool_plperl | 1 | 1 | 1 | 1 | null | null | null |
| btree_gin | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.2 |
| btree_gist | 1.7 | 1.7 | 1.6 | 1.5 | 1.5 | 1.5 | 1.5 |
| citext | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 | 1.5 | 1.4 |
| cube | 1.5 | 1.5 | 1.5 | 1.4 | 1.4 | 1.4 | 1.2 |
| dblink | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
| dict_int | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| dict_xsyn | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| earthdistance | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| fuzzystrmatch | 1.2 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| hstore | 1.8 | 1.8 | 1.8 | 1.7 | 1.6 | 1.5 | 1.4 |
| hstore_plperl | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| insert_username | 1 | 1 | 1 | 1 | null | null | null |
| intagg | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| intarray | 1.5 | 1.5 | 1.5 | 1.3 | 1.2 | 1.2 | 1.2 |
| isn | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 |
| jsonb_plperl | 1 | 1 | 1 | 1 | 1 | null | null |
| lo | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| ltree | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 | 1.1 | 1.1 |
| moddatetime | 1 | 1 | 1 | 1 | null | null | null |
| old_snapshot | 1 | 1 | 1 | null | null | null | null |
| pageinspect | 1.12 | 1.11 | 1.9 | 1.8 | 1.7 | 1.7 | 1.6 |
| pg_buffercache | 1.4 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
| pg_freespacemap | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
| pg_prewarm | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 |
| pg_stat_statements | 1.1 | 1.1 | 1.9 | 1.8 | 1.7 | 1.6 | 1.6 |
| pg_trgm | 1.6 | 1.6 | 1.6 | 1.5 | 1.4 | 1.4 | 1.3 |
| pg_visibility | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
| pg_walinspect | 1.1 | 1 | null | null | null | null | null |
| pgcrypto | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
| pgrowlocks | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
| pgstattuple | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
| plperl | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| plpgsql | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| pltcl | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| postgres_fdw | 1.1 | 1.1 | 1.1 | 1 | 1 | 1 | 1 |
| refint | 1 | 1 | 1 | 1 | null | null | null |
| seg | 1.4 | 1.4 | 1.4 | 1.3 | 1.3 | 1.3 | 1.1 |
| sslinfo | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
| tablefunc | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| tcn | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| tsm_system_rows | 1 | 1 | 1 | 1 | 1 | 1 | 1.1 |
| tsm_system_time | 1 | 1 | 1 | 1 | 1 | 1 | 1.1 |
| unaccent | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| uuid-ossp | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
Aliyun RDS PG Available Extensions

Aliyun RDS for PostgreSQL 16 available extensions (excluding PG built-in extensions)

| name | pg16 | pg15 | pg14 | pg13 | pg12 | pg11 | pg10 | description |
|---|---|---|---|---|---|---|---|---|
| bloom | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Provides a bloom filter-based index access method. |
| btree_gin | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.2 | Provides GIN operator class examples that implement B-tree equivalent behavior for multiple data types and all enum types. |
| btree_gist | 1.7 | 1.7 | 1.6 | 1.5 | 1.5 | 1.5 | 1.5 | Provides GiST operator class examples that implement B-tree equivalent behavior for multiple data types and all enum types. |
| citext | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 | 1.5 | 1.4 | Provides a case-insensitive string type. |
| cube | 1.5 | 1.5 | 1.5 | 1.4 | 1.4 | 1.4 | 1.2 | Provides a data type for representing multi-dimensional cubes. |
| dblink | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | Cross-database table operations. |
| dict_int | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Additional full-text search dictionary template example. |
| earthdistance | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | Provides two different methods to calculate great circle distances on the Earth’s surface. |
| fuzzystrmatch | 1.2 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | Determines similarities and distances between strings. |
| hstore | 1.8 | 1.8 | 1.8 | 1.7 | 1.6 | 1.5 | 1.4 | Stores key-value pairs in a single PostgreSQL value. |
| intagg | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | Provides an integer aggregator and an enumerator. |
| intarray | 1.5 | 1.5 | 1.5 | 1.3 | 1.2 | 1.2 | 1.2 | Provides some useful functions and operators for manipulating null-free integer arrays. |
| isn | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 | Validates input according to a hard-coded prefix list, also used for concatenating numbers during output. |
| ltree | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 | 1.1 | 1.1 | For representing labels of data stored in a hierarchical tree structure. |
| pg_buffercache | 1.4 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | Provides a way to examine the shared buffer cache in real time. |
| pg_freespacemap | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | Examines the free space map (FSM). |
| pg_prewarm | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.1 | Provides a convenient way to load data into the OS buffer or PostgreSQL buffer. |
| pg_stat_statements | 1.1 | 1.1 | 1.9 | 1.8 | 1.7 | 1.6 | 1.6 | Provides a means of tracking execution statistics of all SQL statements executed by a server. |
| pg_trgm | 1.6 | 1.6 | 1.6 | 1.5 | 1.4 | 1.4 | 1.3 | Provides functions and operators for alphanumeric text similarity, and index operator classes that support fast searching of similar strings. |
| pgcrypto | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | Provides cryptographic functions for PostgreSQL. |
| pgrowlocks | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | Provides a function to show row locking information for a specified table. |
| pgstattuple | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | Provides multiple functions to obtain tuple-level statistics. |
| plperl | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Provides Perl procedural language. |
| plpgsql | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Provides SQL procedural language. |
| pltcl | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Provides Tcl procedural language. |
| postgres_fdw | 1.1 | 1.1 | 1.1 | 1 | 1 | 1 | 1 | Cross-database table operations. |
| sslinfo | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | Provides information about the SSL certificate provided by the current client. |
| tablefunc | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Contains multiple table-returning functions. |
| tsm_system_rows | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Provides the table sampling method SYSTEM_ROWS. |
| tsm_system_time | 1 | 1 | 1 | 1 | 1 | 1 | 1 | Provides the table sampling method SYSTEM_TIME. |
| unaccent | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | A text search dictionary that can remove accents (diacritics) from lexemes. |
| uuid-ossp | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | Provides functions to generate universally unique identifiers (UUIDs) using several standard algorithms. |
| xml2 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | Provides XPath queries and XSLT functionality. |

Performance Comparison

| Metric | Pigsty | Aliyun RDS | AWS RDS |
|---|---|---|---|
| Peak Performance | PGTPC benchmark on NVMe SSD, sysbench oltp_rw | RDS PG Performance Whitepaper, sysbench oltp scenario, QPS 4000 ~ 8000 per core | |
| Storage Spec: Max Capacity | 32TB / NVMe SSD | 32 TB / ESSD PL3 | 64 TB / io2 EBS Block Express |
| Storage Spec: Max IOPS | 4K random read: up to 3M; random write: 200K ~ 350K | 4K random read: up to 1M | 16K random IOPS: 256K |
| Storage Spec: Max Latency | 4K random read: 75µs; random write: 15µs | 4K random read: 200µs | 500µs / inferred as 16K random IO |
| Storage Spec: Max Reliability | UBER < 1e-18, equivalent to 18 nines; MTBF: 2M hours; 5DWPD for 3 years continuous | Reliability 9 nines, equivalent to UBER 1e-9 (Storage and Data Reliability) | Durability: 99.999%, 5 nines (0.001% annual failure rate), io2 specification |
| Storage Spec: Max Cost | ¥31.5/TB·month (5-year warranty amortized / 3.2T / enterprise-grade / MLC) | ¥3200/TB·month (original ¥6400, monthly ¥4000), 50% off with 3-year prepaid | ¥1900/TB·month at max spec 65536GB / 256K IOPS with best discount |

Observability

Pigsty provides nearly 3000 monitoring metrics and 50+ monitoring dashboards, covering database monitoring, host monitoring, connection pool monitoring, load balancer monitoring, and more, providing users with an unparalleled observability experience.

Pigsty provides 638 PostgreSQL-related monitoring metrics, while AWS RDS exposes only 99, and Aliyun RDS offers only a single-digit number of metrics.

Additionally, some other projects provide PostgreSQL monitoring capabilities, but they are comparatively simple.


Maintainability

| Metric | Pigsty | Aliyun RDS | AWS RDS |
|---|---|---|---|
| System Usability | Simple | Simple | Simple |
| Configuration Management | Config files / CMDB based on Ansible Inventory | Can use Terraform | Can use Terraform |
| Change Method | Idempotent playbooks based on Ansible | Console click operations | Console click operations |
| Parameter Tuning | Auto-adapts to node specs; four preset templates: OLTP, OLAP, TINY, CRIT | | |
| Infra as Code | Natively supported | Can use Terraform | Can use Terraform |
| Customizable Parameters | Pigsty Parameters: 283 parameters | | |
| Service & Support | Commercial subscription support available | After-sales ticket support | After-sales ticket support |
| Air-gapped Deployment | Offline installation supported | N/A | N/A |
| Database Migration | Playbooks for zero-downtime migration from existing v10+ PG instances to Pigsty-managed instances via logical replication | Cloud migration assistance: Aliyun RDS Data Sync | |

Cost

Based on experience, the unit cost of RDS is 5-15 times that of self-hosting on comparable software and hardware, with a rent-to-buy ratio on the order of one month — a single month’s rent can approach the price of owning the hardware outright. For details, see Cost Analysis.

| Factor | Metric | Pigsty | Aliyun RDS | AWS RDS |
|---|---|---|---|---|
| Cost | Software License/Service Fee | Free, hardware ~¥20-40/core·month | ¥200-400/core·month | ¥400-1300/core·month |
| Cost | Support Service Fee | Service ~¥100/core·month | Included in RDS cost | |

Other On-Premises Database Management Software

Some software and vendors providing PostgreSQL management capabilities:

  • Aiven: Closed-source commercial cloud-hosted solution
  • Percona: Commercial consulting, simple PG distribution
  • ClusterControl: Commercial database management software

Other Kubernetes Operators

Pigsty deliberately avoids using Kubernetes to manage databases in production, so it differs ecologically from the following operator-based solutions.

  • PGO
  • StackGres
  • CloudNativePG
  • TemboOperator
  • PostgresOperator
  • PerconaOperator
  • Kubegres
  • KubeDB
  • KubeBlocks


2.13.1 - Cost Reference

This article provides cost data to help you evaluate self-hosted Pigsty, cloud RDS costs, and typical DBA salaries.

Overview

| EC2 | Core·Month | RDS | Core·Month |
|---|---|---|---|
| DHH Self-Hosted Core-Month Price (192C 384G) | 25.32 | Junior Open Source DB DBA Reference Salary | ¥15K/person·month |
| IDC Self-Hosted (Dedicated Physical: 64C384G) | 19.53 | Mid-Level Open Source DB DBA Reference Salary | ¥30K/person·month |
| IDC Self-Hosted (Container, 500% Oversold) | 7 | Senior Open Source DB DBA Reference Salary | ¥60K/person·month |
| UCloud Elastic VM (8C16G, Oversold) | 25 | ORACLE Database License | 10000 |
| Aliyun ECS 2x Memory (Dedicated, No Oversold) | 107 | Aliyun RDS PG 2x Memory (Dedicated) | 260 |
| Aliyun ECS 4x Memory (Dedicated, No Oversold) | 138 | Aliyun RDS PG 4x Memory (Dedicated) | 320 |
| Aliyun ECS 8x Memory (Dedicated, No Oversold) | 180 | Aliyun RDS PG 8x Memory (Dedicated) | 410 |
| AWS C5D.METAL 96C 200G (Monthly No Prepaid) | 100 | AWS RDS PostgreSQL db.T2 (2x) | 440 |
| AWS C5D.METAL 96C 200G (3-Year Prepaid) | 80 | AWS RDS PostgreSQL db.M5 (4x) | 611 |
| AWS C7A.METAL 192C 384G (3-Year Prepaid) | 104.8 | AWS RDS PostgreSQL db.R6G (8x) | 786 |

RDS Cost Reference

| Payment Model | Price | Annualized (¥10K) |
|---|---|---|
| IDC Self-Hosted (Single Physical Machine) | ¥75K / 5 years | 1.5 |
| IDC Self-Hosted (2-3 Machines for HA) | ¥150K / 5 years | 3.0 ~ 4.5 |
| Aliyun RDS On-Demand | ¥87.36/hour | 76.5 |
| Aliyun RDS Monthly (Baseline) | ¥42K / month | 50 |
| Aliyun RDS Annual (85% off) | ¥425,095 / year | 42.5 |
| Aliyun RDS 3-Year Prepaid (50% off) | ¥750,168 / 3 years | 25 |
| AWS On-Demand | $25,817 / month | 217 |
| AWS 1-Year No Prepaid | $22,827 / month | 191.7 |
| AWS 3-Year Full Prepaid | $120K + $17.5K/month | 175 |
| AWS China/Ningxia On-Demand | ¥197,489 / month | 237 |
| AWS China/Ningxia 1-Year No Prepaid | ¥143,176 / month | 171 |
| AWS China/Ningxia 3-Year Full Prepaid | ¥647K + ¥116K/month | 160.6 |

Here’s a comparison of self-hosted vs cloud database costs:

| Method | Annualized (¥10K) |
|---|---|
| IDC Hosted Server 64C / 384G / 3.2TB NVMe SSD 660K IOPS (2-3 Machines) | 3.0 ~ 4.5 |
| Aliyun RDS PG HA Edition pg.x4m.8xlarge.2c, 64C / 256GB / 3.2TB ESSD PL3 | 25 ~ 50 |
| AWS RDS PG HA Edition db.m5.16xlarge, 64C / 256GB / 3.2TB io1 x 80k IOPS | 160 ~ 217 |

ECS Cost Reference

Pure Compute Price Comparison (Excluding NVMe SSD / ESSD PL3)

Using Aliyun as an example, the monthly compute-only price is 5-7x the self-hosted baseline, while 5-year prepaid is about 2x self-hosted.

| Payment Model | Unit Price (¥/Core·Month) | Relative to Standard | Self-Hosted Premium Multiple |
|---|---|---|---|
| On-Demand (1.5x) | ¥ 202 | 160 % | 9.2 ~ 11.2 |
| Monthly (Standard) | ¥ 126 | 100 % | 5.7 ~ 7.0 |
| 1-Year Prepaid (65% off) | ¥ 83.7 | 66 % | 3.8 ~ 4.7 |
| 2-Year Prepaid (55% off) | ¥ 70.6 | 56 % | 3.2 ~ 3.9 |
| 3-Year Prepaid (44% off) | ¥ 55.1 | 44 % | 2.5 ~ 3.1 |
| 4-Year Prepaid (35% off) | ¥ 45 | 35 % | 2.0 ~ 2.5 |
| 5-Year Prepaid (30% off) | ¥ 38.5 | 30 % | 1.8 ~ 2.1 |
| DHH @ 2023 | ¥ 22.0 | | |
| Tantan IDC Self-Hosted | ¥ 18.0 | | |
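The premium multiples above follow directly from dividing each cloud unit price by the two self-hosted baselines (DHH at ¥22 and Tantan at ¥18 per core·month). A quick sketch to reproduce the column:

```shell
# Recompute the "self-hosted premium multiple" column: cloud ¥/core·month
# divided by the two self-hosted baselines (DHH ~= 22, Tantan ~= 18).
for price in 202 126 83.7 55.1 38.5; do
  awk -v p="$price" 'BEGIN { printf "%s: %.1f ~ %.1f\n", p, p/22, p/18 }'
done
# e.g. the monthly row prints: 126: 5.7 ~ 7.0
```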

Equivalent Price Comparison Including NVMe SSD / ESSD PL3

With a typical NVMe SSD allowance included, the monthly all-in price is 11-14x the self-hosted baseline, and even with 5-year prepaid it remains around 9x.

| Payment Model | Unit Price (¥/Core·Month) | + 40GB ESSD PL3 | Self-Hosted Premium Multiple |
|---|---|---|---|
| On-Demand (1.5x) | ¥ 202 | ¥ 362 | 14.3 ~ 18.6 |
| Monthly (Standard) | ¥ 126 | ¥ 286 | 11.3 ~ 14.7 |
| 1-Year Prepaid (65% off) | ¥ 83.7 | ¥ 244 | 9.6 ~ 12.5 |
| 2-Year Prepaid (55% off) | ¥ 70.6 | ¥ 230 | 9.1 ~ 11.8 |
| 3-Year Prepaid (44% off) | ¥ 55.1 | ¥ 215 | 8.5 ~ 11.0 |
| 4-Year Prepaid (35% off) | ¥ 45 | ¥ 205 | 8.1 ~ 10.5 |
| 5-Year Prepaid (30% off) | ¥ 38.5 | ¥ 199 | 7.9 ~ 10.2 |
| DHH @ 2023 | ¥ 25.3 | | |
| Tantan IDC Self-Hosted | ¥ 19.5 | | |

DHH Case: 192 cores with 12.8TB Gen4 SSD (1c:66); Tantan Case: 64 cores with 3.2T Gen3 MLC SSD (1c:50).

Cloud prices calculated at 40GB ESSD PL3 per core (1 core:4x RAM:40x disk).
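The "+ 40GB ESSD PL3" column follows from this footnote: 40 GB per core at roughly ¥4/GB·month (the ¥4000/TB monthly ESSD PL3 price quoted in the EBS table) adds about ¥160 per core·month on top of the compute price. A quick check:

```shell
# Storage adder: 40 GB/core * ~¥4/GB·month (~¥4000/TB·month) = ¥160/core·month.
# Adding it to the compute-only prices reproduces the second column above.
awk 'BEGIN {
  essd = 40 * 4                          # ¥160 per core·month
  printf "on-demand: %d\n", 202 + essd   # 202 + 160 = 362
  printf "monthly:   %d\n", 126 + essd   # 126 + 160 = 286
}'
```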


EBS Cost Reference

| Evaluation Factor | Local PCI-E NVMe SSD | Aliyun ESSD PL3 | AWS io2 Block Express |
|---|---|---|---|
| Capacity | 32TB | 32 TB | 64 TB |
| IOPS | 4K random read: 600K ~ 1.1M; 4K random write: 200K ~ 350K | 4K random read: up to 1M | 16K random IOPS: 256K |
| Latency | 4K random read: 75µs; 4K random write: 15µs | 4K random read: 200µs | Random IO: ~500µs (contextually inferred as 16K) |
| Reliability | UBER < 1e-18, equivalent to 18 nines; MTBF: 2M hours; 5DWPD for 3 years | Data reliability 9 nines (Storage and Data Reliability) | Durability: 99.999%, 5 nines (0.001% annual failure rate), io2 specification |
| Cost | ¥16/TB·month (5-year amortized / 3.2T MLC), 5-year warranty, ¥3000 retail | ¥3200/TB·month (original ¥6400, monthly ¥4000), 50% off with 3-year full prepaid | ¥1900/TB·month at max spec 65536GB 256K IOPS with best discount |
| SLA | 5-year warranty, replacement on failure | Aliyun RDS SLA — availability 99.99%: 15% monthly fee; 99%: 30% monthly fee; 95%: 100% monthly fee | Amazon RDS SLA — availability 99.95%: 15% monthly fee; 99%: 25% monthly fee; 95%: 100% monthly fee |

S3 Cost Reference

| Date | $/GB·Month | ¥/TB·5Years | HDD ¥/TB | SSD ¥/TB |
|---|---|---|---|---|
| 2006.03 | 0.150 | 63000 | 2800 | |
| 2010.11 | 0.140 | 58800 | 1680 | |
| 2012.12 | 0.095 | 39900 | 420 | 15400 |
| 2014.04 | 0.030 | 12600 | 371 | 9051 |
| 2016.12 | 0.023 | 9660 | 245 | 3766 |
| 2023.12 | 0.023 | 9660 | 105 | 280 |

| Other References | High-Perf Storage | Top-Tier Discounted | vs Purchased NVMe SSD | Price Ref |
|---|---|---|---|---|
| S3 Express | 0.160 | 67200 | DHH 12T | 1400 |
| EBS io2 | 0.125 + IOPS | 114000 | Shannon 3.2T | 900 |

Cloud Exit Collection

There was a time when “moving to the cloud” was almost politically correct in tech circles, and an entire generation of application developers had their vision obscured by the cloud. Using real data analysis and firsthand experience, let’s examine the value and the pitfalls of the public cloud rental model, as a reference in this era of cost reduction and efficiency improvement: see “Cloud Computing Mudslide: Collection”.

Cloud Infrastructure Basics


Cloud Business Model


Cloud Exit Odyssey


Cloud Failure Post-Mortems


RDS Failures


Cloud Vendor Profiles

3 - Concepts

Understand Pigsty’s core concepts, architecture design, and principles. Master high availability, backup recovery, security compliance, and other key capabilities.

Pigsty is a portable, extensible open-source PostgreSQL distribution for building production-grade database services in local environments with declarative configuration and automation. It has a vast ecosystem providing a complete set of tools, scripts, and best practices to bring PostgreSQL to enterprise-grade RDS service levels.

Pigsty’s name comes from PostgreSQL In Great STYle, also understood as Postgres, Infras, Graphics, Service, Toolbox, it’s all Yours—a self-hosted PostgreSQL solution with graphical monitoring that’s all yours. You can find the source code on GitHub, visit the official documentation for more information, or experience the Web UI in the online demo.

pigsty-banner


Why Pigsty? What Can It Do?

PostgreSQL is a nearly perfect database kernel, but it takes more tools and systems around it to become a truly excellent database service. In production environments, you need to manage every aspect of your database: high availability, backup and recovery, monitoring and alerting, access control, parameter tuning, extension installation, connection pooling, load balancing…

Wouldn’t it be easier if all this complex operational work could be automated? This is precisely why Pigsty was created.

Pigsty provides:

  • Out-of-the-Box PostgreSQL Distribution

    Pigsty deeply integrates 440+ extensions from the PostgreSQL ecosystem, providing out-of-the-box distributed, time-series, geographic, spatial, graph, vector, search, and other multi-modal database capabilities. From kernel to RDS distribution, providing production-grade database services for versions 13-18 on EL/Debian/Ubuntu.

  • Self-Healing High Availability Architecture

    A high availability architecture built on Patroni, Etcd, and HAProxy enables automatic failover for hardware failures with seamless traffic handoff. Primary failure recovery time RTO < 45s, data recovery point RPO ≈ 0. You can perform rolling maintenance and upgrades on the entire cluster without application coordination.

  • Complete Point-in-Time Recovery Capability

    Based on pgBackRest and optional MinIO cluster, providing out-of-the-box PITR point-in-time recovery capability. Giving you the ability to quickly return to any point in time, protecting against software defects and accidental data deletion.

  • Flexible Service Access and Traffic Management

    Through HAProxy, Pgbouncer, and VIP, providing flexible service access patterns for read-write separation, connection pooling, and automatic routing. Delivering stable, reliable, auto-routing, transaction-pooled high-performance database services.

  • Stunning Observability

    A modern observability stack based on Prometheus and Grafana provides unparalleled monitoring best practices. Over three thousand types of monitoring metrics describe every aspect of the system, from global dashboards to CRUD operations on individual objects.

  • Declarative Configuration Management

    Following the Infrastructure as Code philosophy, using declarative configuration to describe the entire environment. You just tell Pigsty “what kind of database cluster you want” without worrying about how to implement it—the system automatically adjusts to the desired state.

  • Modular Architecture Design

    A modular architecture design that can be freely combined to suit different scenarios. Beyond the core PostgreSQL module, it also provides optional modules for Redis, MinIO, Etcd, FerretDB, and support for various PG-compatible kernels.

  • Solid Security Best Practices

    Industry-leading security best practices: self-signed CA certificate encryption, AES encrypted backups, scram-sha-256 encrypted passwords, out-of-the-box ACL model, HBA rule sets following the principle of least privilege, ensuring data security.

  • Simple and Easy Deployment

    All dependencies are pre-packaged for one-click installation in environments without internet access. Local sandbox environments can run on micro VMs with 1 core and 2GB RAM, providing functionality identical to production environments. Provides Vagrant-based local sandboxes and Terraform-based cloud deployments.


What Pigsty Is Not

Pigsty is not a traditional, all-encompassing PaaS (Platform as a Service) system.

  • Pigsty doesn’t provide basic hardware resources. It runs on nodes you provide, whether bare metal, VMs, or cloud instances, but it doesn’t create or manage these resources itself (though it provides Terraform templates to simplify cloud resource preparation).

  • Pigsty is not a container orchestration system. It runs directly on the operating system, not requiring Kubernetes or Docker as infrastructure. Of course, it can coexist with these systems and provides a Docker module for running stateless applications.

  • Pigsty is not a general database management tool. It focuses on PostgreSQL and its ecosystem. While it also supports peripheral components like Redis, Etcd, and MinIO, the core is always built around PostgreSQL.

  • Pigsty won’t lock you in. It’s built on open-source components, doesn’t modify the PostgreSQL kernel, and introduces no proprietary protocols. You can continue using your well-managed PostgreSQL clusters anytime without Pigsty.

Pigsty doesn’t restrict how you should or shouldn’t build your database services. For example:

  • Pigsty provides good parameter defaults and configuration templates, but you can override any parameter.
  • Pigsty provides a declarative API, but you can still use underlying tools (Ansible, Patroni, pgBackRest, etc.) for manual management.
  • Pigsty can manage the complete lifecycle, or you can use only its monitoring system to observe existing database instances or RDS.

Pigsty provides a different level of abstraction than the hardware layer—it works at the database service layer, focusing on how to deliver PostgreSQL at its best, rather than reinventing the wheel.


Evolution of PostgreSQL Deployment

To understand Pigsty’s value, let’s review the evolution of PostgreSQL deployment approaches.

Manual Deployment Era

In traditional deployment, DBAs needed to manually install and configure PostgreSQL, manually set up replication, manually configure monitoring, and manually handle failures. The problems with this approach are obvious:

  • Low efficiency: Each instance requires repeating many manual operations, prone to errors.
  • Lack of standardization: Databases configured by different DBAs can vary greatly, making maintenance difficult.
  • Poor reliability: Failure handling depends on manual intervention, with long recovery times and susceptibility to human error.
  • Weak observability: Lack of unified monitoring, making problem discovery and diagnosis difficult.

Managed Database Era

To solve these problems, cloud providers offer managed database services (RDS). Cloud RDS does solve some operational issues, but also brings new challenges:

  • High cost: Managed services typically charge multiples to dozens of times hardware cost as “service fees.”
  • Vendor lock-in: Migration is difficult, tied to specific cloud platforms.
  • Limited functionality: Cannot use certain advanced features, extensions are restricted, parameter tuning is limited.
  • Data sovereignty: Data stored in the cloud, reducing autonomy and control.

Local RDS Era

Pigsty represents a third approach: building database services in local environments that match or exceed cloud RDS.

Pigsty combines the advantages of both approaches:

  • High automation: One-click deployment, automatic configuration, self-healing failures—as convenient as cloud RDS.
  • Complete autonomy: Runs on your own infrastructure, data completely in your own hands.
  • Extremely low cost: Run enterprise-grade database services at near-pure-hardware costs.
  • Complete functionality: Unlimited use of PostgreSQL’s full capabilities and ecosystem extensions.
  • Open architecture: Based on open-source components, no vendor lock-in, free to migrate anytime.

This approach is particularly suitable for:

  • Private and hybrid clouds: Enterprises needing to run databases in local environments.
  • Cost-sensitive users: Organizations looking to reduce database TCO.
  • High-security scenarios: Critical data requiring complete autonomy and control.
  • PostgreSQL power users: Scenarios requiring advanced features and rich extensions.
  • Development and testing: Quickly setting up databases locally that match production environments.

What’s Next

Now that you understand Pigsty’s basic concepts, you can move on to the architecture overview and the installation and configuration guides in the following sections.

3.1 - Architecture

Pigsty’s modular architecture—declarative composition, on-demand customization, flexible deployment.

Pigsty uses a modular architecture with a declarative interface. You can freely combine modules like building blocks as needed.


Modules

Pigsty uses a modular design with six main default modules: PGSQL, INFRA, NODE, ETCD, REDIS, and MINIO.

  • PGSQL: Self-healing HA Postgres clusters powered by Patroni, Pgbouncer, HAproxy, PgBackrest, and more.
  • INFRA: Local software repo, Nginx, Grafana, Victoria, AlertManager, Blackbox Exporter—the complete observability stack.
  • NODE: Tune nodes to desired state—hostname, timezone, NTP, ssh, sudo, haproxy, docker, vector, keepalived.
  • ETCD: Distributed key-value store as DCS for HA Postgres clusters: consensus leader election/config management/service discovery.
  • REDIS: Redis servers supporting standalone primary-replica, sentinel, and cluster modes with full monitoring.
  • MINIO: S3-compatible simple object storage that can serve as an optional backup destination for PG databases.

You can declaratively compose them freely. If you only want host monitoring, installing the INFRA module on infrastructure nodes and the NODE module on managed nodes is sufficient. The ETCD and PGSQL modules are used to build HA PG clusters—installing these modules on multiple nodes automatically forms a high-availability database cluster. You can reuse Pigsty infrastructure and develop your own modules; REDIS and MINIO can serve as examples. More modules will be added—preliminary support for Mongo and MySQL is already on the roadmap.
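As a concrete illustration of this composition, a minimal inventory sketch might look like the following. This assumes Pigsty’s documented group-variable conventions; `pg_cluster`, `pg_seq`, and `pg_role` appear elsewhere in this doc, while `infra_seq`, `etcd_seq`, and `etcd_cluster` are assumed from Pigsty defaults — verify against your version’s configuration reference:

```yaml
# Hypothetical inventory sketch: compose modules by assigning nodes to groups.
all:
  children:
    infra:                      # INFRA module: repo + observability stack
      hosts: { 10.10.10.10: { infra_seq: 1 } }
    etcd:                       # ETCD module: DCS for HA Postgres clusters
      hosts: { 10.10.10.10: { etcd_seq: 1 } }
      vars: { etcd_cluster: etcd }
    pg-meta:                    # PGSQL module: a one-node Postgres cluster
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-meta }
```

Since every module here targets the same node, this sketch also describes the standalone installation; spreading the groups across more hosts is how larger deployments are composed.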

Note that all modules depend strongly on the NODE module: in Pigsty, nodes must first have the NODE module installed to be managed before deploying other modules. When nodes (by default) use the local software repo for installation, the NODE module has a weak dependency on the INFRA module. Therefore, the admin/infrastructure nodes with the INFRA module complete the bootstrap process in the deploy.yml playbook, resolving the circular dependency.

pigsty-sandbox


Standalone Installation

By default, Pigsty installs on a single node (physical/virtual machine). The deploy.yml playbook installs INFRA, ETCD, PGSQL, and optionally MINIO modules on the current node, giving you a fully-featured observability stack (Prometheus, Grafana, Loki, AlertManager, PushGateway, BlackboxExporter, etc.), plus a built-in PostgreSQL standalone instance as a CMDB, ready to use out of the box (cluster name pg-meta, database name meta).

This node now has a complete self-monitoring system, visualization tools, and a Postgres database with PITR auto-configured (HA unavailable since you only have one node). You can use this node as a devbox, for testing, running demos, and data visualization/analysis. Or, use this node as an admin node to deploy and manage more nodes!

pigsty-arch


Monitoring

The installed standalone meta node can serve as an admin node and monitoring center to bring more nodes and database servers under its supervision and control.

Pigsty’s monitoring system can be used independently. If you want to install the Prometheus/Grafana observability stack, Pigsty provides best practices! It offers rich dashboards for host nodes and PostgreSQL databases. Whether or not these nodes or PostgreSQL servers are managed by Pigsty, with simple configuration, you immediately have a production-grade monitoring and alerting system, bringing existing hosts and PostgreSQL under management.

pigsty-dashboard.jpg


HA PostgreSQL Clusters

Pigsty helps you own your own production-grade HA PostgreSQL RDS service anywhere.

To create such an HA PostgreSQL cluster/RDS service, you simply describe it with a short config and run the playbook to create it:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars: { pg_cluster: pg-test }

$ bin/pgsql-add pg-test  # Initialize cluster 'pg-test'

In less than 10 minutes, you’ll have a PostgreSQL database cluster with service access, monitoring, backup PITR, and HA fully configured.

pigsty-ha.png

Hardware failures are covered by the self-healing HA architecture provided by patroni, etcd, and haproxy—in case of primary failure, automatic failover executes within 45 seconds by default. Clients don’t need to modify config or restart applications: Haproxy uses patroni health checks for traffic distribution, and read-write requests are automatically routed to the new cluster primary, avoiding split-brain issues. This process is nearly seamless—in case of replica failure or a planned switchover, clients experience at most a brief interruption of in-flight queries.

Software failures, human errors, and datacenter-level disasters are covered by pgbackrest and the optional MinIO cluster. This provides local/cloud PITR capabilities and, in case of datacenter failure, offers cross-region replication and disaster recovery.

3.1.1 - Nodes

A node is an abstraction of hardware/OS resources—physical machines, bare metal, VMs, or containers/pods.

A node is an abstraction of hardware resources and operating systems. It can be a physical machine, bare metal, virtual machine, or container/pod.

Any machine running a Linux OS (with systemd daemon) and standard CPU/memory/disk/network resources can be treated as a node.

Nodes can have modules installed. Pigsty has several node types, distinguished by which modules are deployed:

| Type | Description |
|------|-------------|
| Regular Node | A node managed by Pigsty |
| ADMIN Node | The node that runs Ansible to issue management commands |
| INFRA Node | Nodes with the INFRA module installed |
| ETCD Node | Nodes with the ETCD module for DCS |
| MINIO Node | Nodes with the MINIO module for object storage |
| PGSQL Node | Nodes with the PGSQL module installed |
| … | Nodes with other modules… |

In a singleton Pigsty deployment, multiple roles converge on one node: it serves as the regular node, admin node, infra node, ETCD node, and database node simultaneously.


Regular Node

Nodes managed by Pigsty can have modules installed. The node.yml playbook configures nodes to the desired state. A regular node may run the following services:

| Component | Port | Description | Status |
|-----------|------|-------------|--------|
| node_exporter | 9100 | Host metrics exporter | Enabled |
| haproxy | 9101 | HAProxy load balancer (admin port) | Enabled |
| vector | 9598 | Log collection agent | Enabled |
| docker | 9323 | Container runtime support | Optional |
| keepalived | n/a | L2 VIP for node cluster | Optional |
| keepalived_exporter | 9650 | Keepalived status monitor | Optional |

Here, node_exporter exposes host metrics, vector sends logs to the collection system, and haproxy provides load balancing. These three are enabled by default. Docker, keepalived, and keepalived_exporter are optional and can be enabled as needed.


ADMIN Node

A Pigsty deployment has exactly one admin node—the node that runs Ansible playbooks and issues control/deployment commands.

This node has ssh/sudo access to all other nodes. Admin node security is critical; ensure access is strictly controlled.

During single-node installation and configuration, the current node becomes the admin node. However, alternatives exist. For example, if your laptop can SSH to all managed nodes and has Ansible installed, it can serve as the admin node—though this isn’t recommended for production.

For instance, you might use your laptop to manage a Pigsty VM in the cloud. In this case, your laptop is the admin node.

In serious production environments, the admin node is typically 1-2 dedicated DBA machines. In resource-constrained setups, INFRA nodes often double as admin nodes since all INFRA nodes have Ansible installed by default.


INFRA Node

A Pigsty deployment may have 1 or more INFRA nodes; large production environments typically have 2-3.

The infra group in the inventory defines which nodes are INFRA nodes. These nodes run the INFRA module with these components:

| Component | Port | Description |
|-----------|------|-------------|
| nginx | 80/443 | Web UI, local software repository |
| grafana | 3000 | Visualization platform |
| victoriametrics | 8428 | Time-series database (metrics) |
| victorialogs | 9428 | Log collection server |
| victoriatraces | 10428 | Trace collection server |
| vmalert | 8880 | Alerting and derived metrics |
| alertmanager | 9059 | Alert aggregation and routing |
| blackbox_exporter | 9115 | Blackbox probing (ping nodes/VIPs) |
| dnsmasq | 53 | Internal DNS resolution |
| chronyd | 123 | NTP time server |
| ansible | - | Playbook execution |

Nginx serves as the module’s entry point, providing the web UI and local software repository. With multiple INFRA nodes, services on each are independent, but you can access all monitoring data sources from any INFRA node’s Grafana.

Pigsty is licensed under Apache-2.0, though the embedded Grafana component is licensed under AGPLv3.


ETCD Node

The ETCD module provides Distributed Consensus Service (DCS) for PostgreSQL high availability.

The etcd group in the inventory defines ETCD nodes. These nodes run etcd servers on two ports:

| Component | Port | Description |
|-----------|------|-------------|
| etcd | 2379 | ETCD key-value store (client port) |
| etcd | 2380 | ETCD cluster peer communication |

MINIO Node

The MINIO module provides optional backup storage for PostgreSQL.

The minio group in the inventory defines MinIO nodes. These nodes run MinIO servers on:

| Component | Port | Description |
|-----------|------|-------------|
| minio | 9000 | MinIO S3 API endpoint |
| minio | 9001 | MinIO admin console |

PGSQL Node

Nodes with the PGSQL module are called PGSQL nodes. Node and PostgreSQL instance have a 1:1 deployment—one PG instance per node.

PGSQL nodes can borrow identity from their PostgreSQL instance—controlled by node_id_from_pg, defaulting to true, meaning the node name is set to the PG instance name.

PGSQL nodes run these additional components beyond regular node services:

| Component | Port | Description | Status |
|-----------|------|-------------|--------|
| postgres | 5432 | PostgreSQL database server | Enabled |
| pgbouncer | 6432 | PgBouncer connection pool | Enabled |
| patroni | 8008 | Patroni HA management | Enabled |
| pg_exporter | 9630 | PostgreSQL metrics exporter | Enabled |
| pgbouncer_exporter | 9631 | PgBouncer metrics exporter | Enabled |
| pgbackrest_exporter | 9854 | pgBackRest metrics exporter | Enabled |
| vip-manager | n/a | Binds L2 VIP to cluster primary | Optional |
| {{ pg_cluster }}-primary | 5433 | HAProxy service: pooled read/write | Enabled |
| {{ pg_cluster }}-replica | 5434 | HAProxy service: pooled read-only | Enabled |
| {{ pg_cluster }}-default | 5436 | HAProxy service: primary direct connection | Enabled |
| {{ pg_cluster }}-offline | 5438 | HAProxy service: offline read | Enabled |
| {{ pg_cluster }}-<service> | 543x | HAProxy service: custom PostgreSQL services | Custom |

The vip-manager is only enabled when users configure a PG VIP. Additional custom services can be defined in pg_services, exposed via haproxy using additional service ports.
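A custom service entry in pg_services might look like the following sketch (the field layout follows Pigsty’s service definition format; the standby service shown here is illustrative):

```yaml
pg_services:
  - name: standby        # service name, exposed as {{ pg_cluster }}-standby
    port: 5435           # additional HAProxy port for this service
    dest: default        # forward traffic to the default destination port
    check: /sync         # patroni health-check endpoint used to select members
    selector: "[]"       # instance selector expression
```

Each entry adds one more 543x listener to HAProxy on every node of the cluster.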


Node Relationships

Regular nodes typically reference an INFRA node via the admin_ip parameter as their infrastructure provider. For example, with global admin_ip = 10.10.10.10, all nodes use infrastructure services at this IP.

Parameters that reference ${admin_ip}:

| Parameter | Module | Default Value | Description |
|-----------|--------|---------------|-------------|
| repo_endpoint | INFRA | http://${admin_ip}:80 | Software repo URL |
| repo_upstream.baseurl | INFRA | http://${admin_ip}/pigsty | Local repo baseurl |
| infra_portal.endpoint | INFRA | ${admin_ip}:<port> | Nginx proxy backend |
| dns_records | INFRA | ["${admin_ip} i.pigsty", ...] | DNS records |
| node_default_etc_hosts | NODE | ["${admin_ip} i.pigsty"] | Default static DNS |
| node_etc_hosts | NODE | - | Custom static DNS |
| node_dns_servers | NODE | ["${admin_ip}"] | Dynamic DNS servers |
| node_ntp_servers | NODE | - | NTP servers (optional) |

Typically the admin node and INFRA node coincide. With multiple INFRA nodes, the admin node is usually the first one; others serve as backups.

In large-scale production deployments, you might separate the Ansible admin node from INFRA module nodes. For example, use 1-2 small dedicated hosts under the DBA team as the control hub (ADMIN nodes), and 2-3 high-spec physical machines as monitoring infrastructure (INFRA nodes).

Typical node counts by deployment scale:

| Scale | ADMIN | INFRA | ETCD | MINIO | PGSQL |
|-------|-------|-------|------|-------|-------|
| Single-node | 1 | 1 | 1 | 0 | 1 |
| 3-node | 1 | 3 | 3 | 0 | 3 |
| Small prod | 1 | 2 | 3 | 0 | N |
| Large prod | 2 | 3 | 5 | 4+ | N |

3.1.2 - Infrastructure

Infrastructure module architecture, components, and functionality in Pigsty.

Running production-grade, highly available PostgreSQL clusters typically requires a comprehensive set of infrastructure services (foundation) for support, such as monitoring and alerting, log collection, time synchronization, DNS resolution, and local software repositories. Pigsty provides the INFRA module to address this—it’s an optional module, but we strongly recommend enabling it.


Overview

The diagram below shows the architecture of a single-node deployment. The right half represents the components included in the INFRA module:

| Component | Type | Description |
|-----------|------|-------------|
| Nginx | Web Server | Unified entry for WebUI, local repo, reverse proxy for internal services |
| Repo | Software Repo | APT/DNF repository with all RPM/DEB packages needed for deployment |
| Grafana | Visualization | Displays metrics, logs, and traces; hosts dashboards, reports, and custom data apps |
| VictoriaMetrics | Time Series DB | Scrapes all metrics, Prometheus API compatible, provides VMUI query interface |
| VictoriaLogs | Log Platform | Centralized log storage; all nodes run Vector by default, pushing logs here |
| VictoriaTraces | Tracing | Collects slow SQL, service traces, and other tracing data |
| VMAlert | Alert Engine | Evaluates alerting rules, pushes events to Alertmanager |
| AlertManager | Alert Manager | Aggregates alerts, dispatches notifications via email, Webhook, etc. |
| BlackboxExporter | Blackbox Probe | Probes reachability of IPs/VIPs/URLs |
| DNSMASQ | DNS Service | Provides DNS resolution for domains used within Pigsty [Optional] |
| Chronyd | Time Sync | Provides NTP time synchronization to ensure consistent time across nodes [Optional] |
| CA | Certificate | Issues encryption certificates within the environment |
| Ansible | Orchestration | Batch, declarative, agentless tool for managing large numbers of servers |

pigsty-arch


Nginx

Nginx is the access entry point for all WebUI services in Pigsty, using ports 80 / 443 for HTTP/HTTPS by default. Live Demo

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10 | http://i.pigsty | https://i.pigsty | https://demo.pigsty.io |

Infrastructure components with WebUIs can be exposed uniformly through Nginx, such as Grafana, VictoriaMetrics (VMUI), AlertManager, and HAProxy console. Additionally, the local software repository and other static resources are served via Nginx.

Nginx configures local web servers or reverse proxy servers based on definitions in infra_portal.

infra_portal:
  home : { domain: i.pigsty }

By default, it exposes Pigsty’s admin homepage: i.pigsty. Different endpoints on this page proxy different components:

| Endpoint | Component | Native Port | Notes | Public Demo |
|----------|-----------|-------------|-------|-------------|
| / | Nginx | 80/443 | Homepage, local repo, file server | demo.pigsty.io |
| /ui/ | Grafana | 3000 | Grafana dashboard entry | demo.pigsty.io/ui/ |
| /vmetrics/ | VictoriaMetrics | 8428 | Time series DB Web UI | demo.pigsty.io/vmetrics/ |
| /vlogs/ | VictoriaLogs | 9428 | Log DB Web UI | demo.pigsty.io/vlogs/ |
| /vtraces/ | VictoriaTraces | 10428 | Tracing Web UI | demo.pigsty.io/vtraces/ |
| /vmalert/ | VMAlert | 8880 | Alert rule management | demo.pigsty.io/vmalert/ |
| /alertmgr/ | AlertManager | 9059 | Alert management Web UI | demo.pigsty.io/alertmgr/ |
| /blackbox/ | Blackbox | 9115 | Blackbox probe | |

Pigsty allows rich customization of Nginx as a local file server or reverse proxy, with self-signed or real HTTPS certificates.
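For example, an extra upstream service could be exposed through Nginx with an infra_portal entry like the following sketch (the myapp record, its domain, and its endpoint are hypothetical placeholders):

```yaml
infra_portal:
  home  : { domain: i.pigsty }                                  # admin homepage (default entry)
  myapp : { domain: app.pigsty, endpoint: "10.10.10.10:8080" }  # hypothetical reverse-proxied app
```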

For more information, see: Tutorial: Nginx—Expose Web Services via Proxy and Tutorial: Certbot—Request and Renew HTTPS Certificates


Repo

Pigsty creates a local software repository on the Infra node during installation to accelerate subsequent software installations. Live Demo

This repository defaults to the /www/pigsty directory, served by Nginx and mounted at the /pigsty path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/pigsty | http://i.pigsty/pigsty | https://i.pigsty/pigsty | https://demo.pigsty.io/pigsty |

Pigsty supports offline installation, which essentially pre-copies a prepared local software repository to the target environment. When Pigsty performs production deployment and needs to create a local software repository, if it finds the /www/pigsty/repo_complete marker file already exists locally, it skips downloading packages from upstream and uses existing packages directly, avoiding internet downloads.
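The marker check is conceptually simple. A minimal sketch of the logic (the actual implementation lives in Pigsty’s repo role; `repo_is_complete` and `REPO_DIR` are illustrative names):

```shell
#!/bin/sh
# Return success if the local repo directory carries the completion marker.
repo_is_complete() {
  [ -f "$1/repo_complete" ]
}

REPO_DIR="${1:-/www/pigsty}"   # Pigsty's default local repo directory
if repo_is_complete "$REPO_DIR"; then
  echo "repo complete: using existing packages, skipping upstream downloads"
else
  echo "repo incomplete: downloading packages from upstream"
fi
```

Removing the marker file thus forces a fresh download from upstream on the next run.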

repo

For more information, see: Config: INFRA - REPO


Grafana

Grafana is the core component of Pigsty’s monitoring system, used for visualizing metrics, logs, and various information. Live Demo

Grafana listens on port 3000 by default and is proxied via Nginx at the /ui path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/ui | http://i.pigsty/ui | https://i.pigsty/ui | https://demo.pigsty.io/ui |

Pigsty provides pre-built dashboards based on VictoriaMetrics / Logs / Traces, with one-click drill-down and roll-up via URL jumps for rapid troubleshooting.

Grafana can also serve as a low-code visualization platform, so the ECharts, victoriametrics-datasource, and victorialogs-datasource plugins are installed by default, with the Victoria datasources registered uniformly as vmetrics-*, vlogs-*, vtraces-* for easy custom dashboard extension.

dashboard

For more information, see: Config: INFRA - GRAFANA.


VictoriaMetrics

VictoriaMetrics is Pigsty’s time series database, responsible for scraping and storing all monitoring metrics. Live Demo

It listens on port 8428 by default, mounted at Nginx /vmetrics path, and also accessible via the p.pigsty domain:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vmetrics | http://p.pigsty | https://i.pigsty/vmetrics | https://demo.pigsty.io/vmetrics |

VictoriaMetrics is fully compatible with the Prometheus API, supporting PromQL queries, remote read/write protocols, and the Alertmanager API. The built-in VMUI provides an ad-hoc query interface for exploring metrics data directly, and also serves as a Grafana datasource.

vmetrics

For more information, see: Config: INFRA - VMETRICS


VictoriaLogs

VictoriaLogs is Pigsty’s log platform, centrally storing structured logs from all nodes. Live Demo

It listens on port 9428 by default, mounted at Nginx /vlogs path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vlogs | http://i.pigsty/vlogs | https://i.pigsty/vlogs | https://demo.pigsty.io/vlogs |

All managed nodes run Vector Agent by default, collecting system logs, PostgreSQL logs, Patroni logs, Pgbouncer logs, etc., processing them into structured format and pushing to VictoriaLogs. The built-in Web UI supports log search and filtering, and can be integrated with Grafana’s victorialogs-datasource plugin for visual analysis.

vlogs

For more information, see: Config: INFRA - VLOGS


VictoriaTraces

VictoriaTraces is used for collecting trace data and slow SQL records. Live Demo

It listens on port 10428 by default, mounted at Nginx /vtraces path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vtraces | http://i.pigsty/vtraces | https://i.pigsty/vtraces | https://demo.pigsty.io/vtraces |

VictoriaTraces provides a Jaeger-compatible interface for analyzing service call chains and database slow queries. Combined with Grafana dashboards, it enables rapid identification of performance bottlenecks and root cause tracing.

For more information, see: Config: INFRA - VTRACES


VMAlert

VMAlert is the alerting rule computation engine, responsible for evaluating alert rules and pushing triggered events to Alertmanager. Live Demo

It listens on port 8880 by default, mounted at Nginx /vmalert path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/vmalert | http://i.pigsty/vmalert | https://i.pigsty/vmalert | https://demo.pigsty.io/vmalert |

VMAlert reads metrics data from VictoriaMetrics and periodically evaluates alerting rules. Pigsty provides pre-built alerting rules for PGSQL, NODE, REDIS, and other modules, covering common failure scenarios out of the box.

vmalert

For more information, see: Config: INFRA - VMALERT


AlertManager

AlertManager handles alert event aggregation, deduplication, grouping, and dispatch. Live Demo

It listens on port 9059 by default, mounted at Nginx /alertmgr path, and also accessible via the a.pigsty domain:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/alertmgr | http://a.pigsty | https://i.pigsty/alertmgr | https://demo.pigsty.io/alertmgr |

AlertManager supports multiple notification channels: email, Webhook, Slack, PagerDuty, WeChat Work, etc. Through alert routing rules, differentiated dispatch based on severity level and module type is possible, with support for silencing, inhibition, and other advanced features.

alertmanager

For more information, see: Config: INFRA - AlertManager


BlackboxExporter

Blackbox Exporter is used for active probing of target reachability, enabling blackbox monitoring.

It listens on port 9115 by default, mounted at Nginx /blackbox path:

| IP Access (replace) | Domain (HTTP) | Domain (HTTPS) | Public Demo |
|---------------------|---------------|----------------|-------------|
| http://10.10.10.10/blackbox | http://i.pigsty/blackbox | https://i.pigsty/blackbox | https://demo.pigsty.io/blackbox |

It supports multiple probe methods including ICMP Ping, TCP ports, and HTTP/HTTPS endpoints. Useful for monitoring VIP reachability, service port availability, external dependency health, etc.—an important tool for assessing failure impact scope.

blackbox

For more information, see: Config: INFRA - BLACKBOX


Ansible

Ansible is Pigsty’s core orchestration tool; all deployment, configuration, and management operations are performed through Ansible Playbooks.

Pigsty automatically installs Ansible on the admin node (Infra node) during installation. It adopts a declarative configuration style and idempotent playbook design: the same playbook can be run repeatedly, and the system automatically converges to the desired state without side effects.

Ansible’s core advantages:

  • Agentless: Executes remotely via SSH, no additional software needed on target nodes.
  • Declarative: Describes the desired state rather than execution steps; configuration is documentation.
  • Idempotent: Multiple executions produce consistent results; supports retry after partial failures.

For more information, see: Playbooks: Pigsty Playbook


DNSMASQ

DNSMASQ provides DNS resolution on INFRA nodes, resolving domain names to their corresponding IP addresses.

DNSMASQ listens on port 53 (UDP/TCP) by default, providing DNS resolution for all nodes. Records are stored in the /infra/hosts directory.

Other modules automatically register their domain names with DNSMASQ during deployment, which you can use as needed. DNS is completely optional—Pigsty works normally without it. Client nodes can configure INFRA nodes as their DNS servers, allowing access to services via domain names without remembering IP addresses.
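Static records follow a simple "ip domain [domain ...]" line format; a hedged sketch of a dns_records entry (the p.pigsty and a.pigsty domains are the metric/alert portal domains mentioned elsewhere in this document):

```yaml
dns_records:
  - "${admin_ip} i.pigsty"             # admin portal homepage
  - "${admin_ip} p.pigsty a.pigsty"    # VictoriaMetrics / AlertManager domains
```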

For more information, see: Config: INFRA - DNS and Tutorial: DNS—Configure Domain Resolution


Chronyd

Chronyd provides NTP time synchronization, ensuring consistent clocks across all nodes. It listens on port 123 (UDP) by default as the time source.

Time synchronization is critical for distributed systems: log analysis requires aligned timestamps, certificate validation depends on accurate clocks, and PostgreSQL streaming replication is sensitive to clock drift. In isolated network environments, the INFRA node can serve as an internal NTP server with other nodes synchronizing to it.

In Pigsty, all nodes run chronyd by default for time sync. The default upstream is pool.ntp.org public NTP servers. Chronyd is essentially managed by the Node module, but in isolated networks, you can use admin_ip to point to the INFRA node’s Chronyd service as the internal time source. In this case, the Chronyd service on the INFRA node serves as the internal time synchronization infrastructure.
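Pointing nodes at the INFRA node’s Chronyd amounts to overriding one parameter; a sketch (using a chrony `server` directive with `${admin_ip}` resolving to the INFRA node, instead of the default public pool):

```yaml
node_ntp_servers:                # NTP sources written into chrony config
  - "server ${admin_ip} iburst"  # use the INFRA node as the internal time source
```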

For more information, see: Config: NODE - TIME


INFRA Node vs Regular Node

In Pigsty, the relationship between nodes and infrastructure is a weak circular dependency: node_monitor → infra → node

The NODE module itself doesn’t depend on the INFRA module, but the monitoring functionality (node_monitor) requires the monitoring platform and services provided by the infrastructure module.

Therefore, in the infra.yml and deploy playbooks, an “interleaved deployment” technique is used:

  • First, initialize the NODE module on all regular nodes, but skip monitoring config since infrastructure isn’t deployed yet.
  • Then, initialize the INFRA module on the INFRA node—monitoring is now available.
  • Finally, reconfigure monitoring on all regular nodes, connecting to the now-deployed monitoring platform.

If you don’t need “one-shot” deployment of all nodes, you can use phased deployment: initialize INFRA nodes first, then regular nodes.

How Are Nodes Coupled to Infrastructure?

Regular nodes reference an INFRA node via the admin_ip parameter as their infrastructure provider.

For example, when you configure global admin_ip = 10.10.10.10, all nodes will typically use infrastructure services at this IP.

This design allows quick, batch switching of infrastructure providers. Parameters that may reference ${admin_ip}:

| Parameter | Module | Default Value | Description |
|-----------|--------|---------------|-------------|
| repo_endpoint | INFRA | http://${admin_ip}:80 | Software repo URL |
| repo_upstream.baseurl | INFRA | http://${admin_ip}/pigsty | Local repo baseurl |
| infra_portal.endpoint | INFRA | ${admin_ip}:<port> | Nginx proxy backend |
| dns_records | INFRA | ["${admin_ip} i.pigsty", ...] | DNS records |
| node_default_etc_hosts | NODE | ["${admin_ip} i.pigsty"] | Default static DNS |
| node_etc_hosts | NODE | [] | Custom static DNS |
| node_dns_servers | NODE | ["${admin_ip}"] | Dynamic DNS servers |
| node_ntp_servers | NODE | ["pool pool.ntp.org iburst"] | NTP servers (optional) |

For example, when a node installs software, the local repo points to the Nginx local software repository at admin_ip:80/pigsty. The DNS server also points to DNSMASQ at admin_ip:53. However, this isn’t mandatory—nodes can ignore the local repo and install directly from upstream internet sources (most single-node config templates); DNS servers can also remain unconfigured, as Pigsty has no DNS dependency.
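For instance, repointing an environment at a new infrastructure node is mostly a matter of changing one global variable, since the derived defaults follow automatically unless explicitly overridden (a sketch; the two derived values shown are their defaults):

```yaml
all:
  vars:
    admin_ip: 10.10.10.10                  # infrastructure provider for all nodes
    repo_endpoint: http://${admin_ip}:80   # derived default: local repo served by Nginx
    node_dns_servers: ["${admin_ip}"]      # derived default: DNSMASQ on the INFRA node
```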


INFRA Node vs ADMIN Node

The management-initiating ADMIN node typically coincides with the INFRA node. In single-node deployment, this is exactly the case. In multi-node deployment with multiple INFRA nodes, the admin node is usually the first in the infra group; others serve as backups. However, exceptions exist. You might separate them for various reasons:

For example, in large-scale production deployments, a classic pattern uses 1-2 dedicated management hosts (tiny VMs suffice) belonging to the DBA team as the control hub, with 2-3 high-spec physical machines (or more!) as monitoring infrastructure. Here, admin nodes are separate from infrastructure nodes. In this case, the admin_ip in your config should point to an INFRA node’s IP, not the current ADMIN node’s IP. This is for historical reasons: initially ADMIN and INFRA nodes were tightly coupled concepts, with separation capabilities evolving later, so the parameter name wasn’t changed.

Another common scenario is managing cloud nodes locally. For example, you can install Ansible on your laptop and specify cloud nodes as “managed targets.” In this case, your laptop acts as the ADMIN node, while cloud servers act as INFRA nodes.

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 , ansible_host: your_ssh_alias } } }  # <--- Use ansible_host to point to cloud node (fill in ssh alias)
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }    # SSH connection will use: ssh your_ssh_alias
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.10
    region: default

Multiple INFRA Nodes

By default, Pigsty only needs one INFRA node for most requirements. Even if the INFRA module goes down, it won’t affect database services on other nodes.

However, in production environments with high monitoring and alerting requirements, you may want multiple INFRA nodes to improve infrastructure availability. A common deployment uses two Infra nodes for redundancy, monitoring each other… or more nodes to deploy a distributed Victoria cluster for unlimited horizontal scaling.

Each Infra node is independent—Nginx points to services on the local machine. VictoriaMetrics independently scrapes metrics from all services in the environment, and logs are pushed to all VictoriaLogs collection endpoints by default. The only exception is Grafana: every Grafana instance registers all VictoriaMetrics / Logs / Traces / PostgreSQL instances as datasources. Therefore, each Grafana instance can see complete monitoring data.

If you modify Grafana—such as adding new dashboards or changing datasource configs—these changes only affect the Grafana instance on that node. To keep Grafana consistent across all nodes, use a PostgreSQL database as shared storage. See Tutorial: Configure Grafana High Availability for details.
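The shared-storage approach works because Grafana can use PostgreSQL as its backing database instead of the default local SQLite. A minimal grafana.ini sketch (host, database name, and credentials are placeholders):

```ini
[database]
type = postgres
host = 10.10.10.10:5432   ; primary of a Pigsty-managed PG cluster
name = grafana            ; database holding dashboards, users, datasources
user = grafana
password = CHANGEME
```

With all Grafana instances pointing at the same database, dashboard and datasource changes made on one node are visible on every node.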

3.1.3 - PGSQL Arch

PostgreSQL module component interactions and data flow.

The PGSQL module organizes PostgreSQL in production as clusters: logical entities composed of a group of database instances associated by primary-replica relationships.


Overview

The PGSQL module includes the following components, working together to provide production-grade PostgreSQL HA cluster services:

| Component | Type | Description |
|-----------|------|-------------|
| postgres | Database | The world’s most advanced open-source relational database, PGSQL core |
| patroni | HA | Manages PostgreSQL, coordinates failover, leader election, config changes |
| pgbouncer | Pool | Lightweight connection pooling middleware, reduces overhead, adds flexibility |
| pgbackrest | Backup | Full/incremental backup and WAL archiving, supports local and object storage |
| pg_exporter | Metrics | Exports PostgreSQL monitoring metrics for Prometheus scraping |
| pgbouncer_exporter | Metrics | Exports Pgbouncer connection pool metrics |
| pgbackrest_exporter | Metrics | Exports backup status metrics |
| vip-manager | VIP | Binds L2 VIP to current primary node for transparent failover [Optional] |

The vip-manager is an on-demand component. Additionally, PGSQL uses components from other modules:

| Component | Module | Type | Description |
|-----------|--------|------|-------------|
| haproxy | NODE | LB | Exposes service ports, routes traffic to primary or replicas |
| vector | NODE | Logging | Collects PostgreSQL, Patroni, Pgbouncer logs and ships to center |
| etcd | ETCD | DCS | Distributed consistent store for cluster metadata and leader info |

By analogy, if the PostgreSQL database kernel is the CPU, the PGSQL module packages it into a complete computer: Patroni and etcd form the HA subsystem, pgBackRest and MinIO form the backup subsystem, HAProxy, Pgbouncer, and vip-manager form the access subsystem, and the various exporters and Vector build the observability subsystem. And just as with a computer, you can swap in a different kernel (CPU) and add extensions (expansion cards).

| Subsystem | Components | Function |
|-----------|------------|----------|
| HA Subsystem | Patroni + etcd | Failure detection, auto-failover, config management |
| Access Subsystem | HAProxy + Pgbouncer + vip-manager | Service exposure, load balancing, pooling, VIP |
| Backup Subsystem | pgBackRest (+ MinIO) | Full/incremental backup, WAL archiving, PITR |
| Observability Subsystem | pg_exporter / pgbouncer_exporter / pgbackrest_exporter + Vector | Metrics collection, log aggregation |

Component Interaction

pigsty-arch

  • Cluster DNS is resolved by DNSMASQ on infra nodes
  • Cluster VIP is managed by vip-manager, which binds pg_vip_address to the cluster primary node.
  • Cluster services are exposed by HAProxy on nodes, different services distinguished by node ports (543x).
  • Pgbouncer is connection pooling middleware, listening on port 6432 by default, buffering connections, exposing additional metrics, and providing extra flexibility.
  • PostgreSQL listens on port 5432, providing relational database services
    • Installing PGSQL module on multiple nodes with the same cluster name automatically forms an HA cluster via streaming replication
    • PostgreSQL process is managed by patroni by default.
  • Patroni listens on port 8008 by default, supervising PostgreSQL server processes
    • Patroni starts Postgres server as child process
    • Patroni uses etcd as DCS: stores config, failure detection, and leader election.
    • Patroni provides Postgres info (e.g., primary/replica) via health checks, HAProxy uses this to distribute traffic
  • pg_exporter exposes postgres monitoring metrics on port 9630
  • pgbouncer_exporter exposes pgbouncer metrics on port 9631
  • pgBackRest uses local backup repository by default (pgbackrest_method = local)
    • If using local (default), pgBackRest creates local repository under pg_fs_bkup on primary node
    • If using minio, pgBackRest creates backup repository on dedicated MinIO cluster
  • Vector collects Postgres-related logs (postgres, pgbouncer, patroni, pgbackrest)
    • vector listens on port 9598, also exposes its own metrics to VictoriaMetrics on infra nodes
    • vector sends logs to VictoriaLogs on infra nodes
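The health-check wiring between HAProxy and Patroni described above can be sketched as follows. This is a hand-written illustration of the pattern, not Pigsty’s generated configuration; it relies on Patroni’s REST API returning HTTP 200 on /primary only on the current leader:

```
listen pg-test-primary
    bind *:5433                               # pooled read-write service port
    option httpchk OPTIONS /primary           # ask Patroni (port 8008) for the role
    http-check expect status 200              # only the leader answers 200
    default-server on-marked-down shutdown-sessions
    server pg-test-1 10.10.10.11:6432 check port 8008
    server pg-test-2 10.10.10.12:6432 check port 8008
    server pg-test-3 10.10.10.13:6432 check port 8008
```

During failover, the old primary starts failing the check and the new primary starts passing it, so traffic converges on the new leader without client-side changes.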

HA Subsystem

The HA subsystem consists of Patroni and etcd, responsible for PostgreSQL cluster failure detection, automatic failover, and configuration management.

How it works: Patroni runs on each node, managing the local PostgreSQL process and writing cluster state (leader, members, config) to etcd. When the primary fails, Patroni coordinates election via etcd, promoting the healthiest replica to new primary. The entire process is automatic, with RTO typically under 45 seconds.

Key Interactions:

  • PostgreSQL: Starts, stops, reloads PG as parent process, controls its lifecycle
  • etcd: External dependency, writes/watches leader key for distributed consensus and failure detection
  • HAProxy: Provides health checks via REST API (:8008), reporting instance role
  • vip-manager: Watches leader key in etcd, auto-migrates VIP

For more information, see: High Availability and Config: PGSQL - PG_BOOTSTRAP


Access Subsystem

The access subsystem consists of HAProxy, Pgbouncer, and vip-manager, responsible for service exposure, traffic routing, and connection pooling.

There are multiple access methods. A typical traffic path is: Client → DNS/VIP → HAProxy (543x) → Pgbouncer (6432) → PostgreSQL (5432)

| Layer | Component | Port | Role |
|-------|-----------|------|------|
| L2 VIP | vip-manager | - | Binds L2 VIP to primary (optional) |
| L4 Load Bal | HAProxy | 543x | Service exposure, load balancing, health checks |
| L7 Pool | Pgbouncer | 6432 | Connection reuse, session management, transaction pooling |

Service Ports:

  • 5433 primary: Read-write service, routes to primary Pgbouncer
  • 5434 replica: Read-only service, routes to replica Pgbouncer
  • 5436 default: Default service, direct to primary (bypasses pool)
  • 5438 offline: Offline service, direct to offline replica (ETL/analytics)
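The name-to-port mapping above is a fixed convention; as a tiny illustrative helper (not part of Pigsty’s tooling):

```shell
#!/bin/sh
# Map a default Pigsty PG service name to its HAProxy port.
pg_service_port() {
  case "$1" in
    primary) echo 5433 ;;   # pooled read-write, routes to primary
    replica) echo 5434 ;;   # pooled read-only, routes to replicas
    default) echo 5436 ;;   # direct primary connection, bypasses pool
    offline) echo 5438 ;;   # offline replica for ETL/analytics
    *) echo "unknown service: $1" >&2; return 1 ;;
  esac
}

pg_service_port primary    # prints 5433
```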

Key Features:

  • HAProxy uses Patroni REST API to determine instance role, auto-routes traffic
  • Pgbouncer uses transaction-level pooling, absorbs connection spikes, reduces PG connection overhead
  • vip-manager watches etcd leader key, auto-migrates VIP during failover

For more information, see: Service Access and Config: PGSQL - PG_ACCESS


Backup Subsystem

The backup subsystem consists of pgBackRest (optionally with MinIO as remote repository), responsible for data backup and point-in-time recovery (PITR).

Backup Types:

  • Full backup: Complete database copy
  • Incremental/differential backup: Only backs up changed data blocks
  • WAL archiving: Continuous transaction log archiving, enables any point-in-time recovery

Storage Backends:

  • local (default): Local disk, backups stored at pg_fs_bkup mount point
  • minio: S3-compatible object storage, supports centralized backup management and off-site DR
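The backup types above correspond to standard pgBackRest operations. A sketch using a placeholder stanza name (Pigsty typically names the stanza after the cluster, but verify against your deployment):

```shell
# Illustrative only: typical pgBackRest operations on the primary.
pgbackrest --stanza=pg-test --type=full backup   # full backup
pgbackrest --stanza=pg-test --type=diff backup   # differential backup since the last full
pgbackrest --stanza=pg-test info                 # list backups and WAL archive status
```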

For more information, see: PITR, Backup & Recovery, and Config: PGSQL - PG_BACKUP


Observability Subsystem

The observability subsystem consists of three Exporters and Vector, responsible for metrics collection and log aggregation.

| Component | Port | Target | Key Metrics |
|-----------|------|--------|-------------|
| pg_exporter | 9630 | PostgreSQL | Sessions, transactions, replication lag, buffer hits |
| pgbouncer_exporter | 9631 | Pgbouncer | Pool utilization, wait queue, hit rate |
| pgbackrest_exporter | 9854 | pgBackRest | Latest backup time, size, type |
| vector | 9598 | postgres/patroni/pgbouncer logs | Structured log stream |

Data Flow:

  • Metrics: Exporter → VictoriaMetrics (INFRA) → Grafana dashboards
  • Logs: Vector → VictoriaLogs (INFRA) → Grafana log queries

pg_exporter / pgbouncer_exporter connect to target services via local Unix socket, decoupled from HA topology. In slim install mode, these components can be disabled.
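Each exporter serves a plain-text metrics endpoint on its port. A sketch, run locally on a PG node (not runnable elsewhere; the grep patterns are illustrative):

```shell
# Illustrative only: inspect exporter endpoints on a PG node.
curl -s http://127.0.0.1:9630/metrics | grep '^pg_up'     # PostgreSQL liveness metric
curl -s http://127.0.0.1:9631/metrics | head              # Pgbouncer pool metrics
curl -s http://127.0.0.1:9854/metrics | grep pgbackrest   # backup status metrics
```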

For more information, see: Config: PGSQL - PG_MONITOR


PostgreSQL

PostgreSQL is the core of the PGSQL module, listening on port 5432 by default to provide relational database services, deployed 1:1 with nodes.

Pigsty currently supports PostgreSQL 14-18 (lifecycle major versions), installed via binary packages from the PGDG official repo. Pigsty also allows you to use other PG kernel forks to replace the default PostgreSQL kernel, and install up to 440 extension plugins on top of the PG kernel.

PostgreSQL processes are managed by the HA agent, Patroni, by default. When a cluster has only one node, that instance is the primary; when the cluster has multiple nodes, the other instances automatically join as replicas, syncing data changes from the primary in real time through physical replication. Replicas can serve read-only requests and automatically take over when the primary fails.


You can access PostgreSQL directly, or through HAProxy and Pgbouncer connection pool.

For more information, see: Config: PGSQL - PG_BOOTSTRAP


Patroni

Patroni is the PostgreSQL HA control component, listening on port 8008 by default.

Patroni takes over PostgreSQL startup, shutdown, configuration, and health status, writing leader and member information to etcd. It handles automatic failover, maintains replication factor, coordinates parameter changes, and provides a REST API for HAProxy, monitoring, and administrators.

HAProxy uses Patroni health check endpoints to determine instance roles and route traffic to the correct primary or replica. vip-manager monitors the leader key in etcd and automatically migrates the VIP when the primary changes.


For more information, see: Config: PGSQL - PG_BOOTSTRAP


Pgbouncer

Pgbouncer is a lightweight connection pooling middleware, listening on port 6432 by default, deployed 1:1 alongside each PostgreSQL instance on its node.

Pgbouncer runs statelessly on each instance, connecting to PostgreSQL via local Unix socket, using Transaction Pooling by default for pool management, absorbing burst client connections, stabilizing database sessions, reducing lock contention, and significantly improving performance under high concurrency.

Pigsty routes production traffic (read-write service 5433 / read-only service 5434) through Pgbouncer by default, while only the default service (5436) and offline service (5438) bypass the pool for direct PostgreSQL connections.

Pool mode is controlled by pgbouncer_poolmode, defaulting to transaction (transaction-level pooling). Connection pooling can be disabled via pgbouncer_enabled.
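Pool state can be inspected through Pgbouncer's built-in admin console. A sketch, assuming an admin-capable user and the common Unix socket directory (both are assumptions about your deployment):

```shell
# Illustrative only: inspect Pgbouncer via its admin pseudo-database on port 6432.
psql -h /var/run/postgresql -p 6432 -d pgbouncer -c 'SHOW POOLS;'   # pool state per database
psql -h /var/run/postgresql -p 6432 -d pgbouncer -c 'SHOW STATS;'   # traffic statistics
```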


For more information, see: Config: PGSQL - PG_ACCESS


pgBackRest

pgBackRest is a professional PostgreSQL backup/recovery tool, one of the strongest in the PG ecosystem, supporting full/incremental/differential backup and WAL archiving.

Pigsty uses pgBackRest for PostgreSQL PITR capability, allowing you to roll back clusters to any point within the backup retention window.

pgBackRest works alongside PostgreSQL, creating the backup repository and executing backup and archive tasks on the primary. By default it uses a local backup repository (pgbackrest_method = local), but it can be configured to use MinIO or other object storage for centralized backup management.

After initialization, pgbackrest_init_backup can automatically trigger the first full backup. Recovery integrates with Patroni, supporting bootstrapping replicas as new primaries or standbys.
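A point-in-time restore with pgBackRest looks roughly like this. The stanza name and timestamp are placeholders, and in Pigsty PITR is normally driven through playbooks rather than invoked by hand:

```shell
# Illustrative only: restore the cluster to a specific point in time.
pgbackrest --stanza=pg-test --type=time \
  --target='2024-01-01 12:00:00+00' --target-action=promote restore
```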


For more information, see: Backup & Recovery and Config: PGSQL - PG_BACKUP


HAProxy

HAProxy is the service entry point and load balancer, exposing multiple database service ports.

| Port | Service | Target | Description |
|------|---------|--------|-------------|
| 9101 | Admin | - | HAProxy statistics and admin page |
| 5433 | primary | Primary Pgbouncer | Read-write service, routes to primary pool |
| 5434 | replica | Replica Pgbouncer | Read-only service, routes to replica pool |
| 5436 | default | Primary Postgres | Default service, direct to primary (bypasses pool) |
| 5438 | offline | Offline Postgres | Offline service, direct to offline replica (ETL/analytics) |

HAProxy uses Patroni REST API health checks to determine instance roles and route traffic to the appropriate primary or replica. Service definitions are composed from pg_default_services and pg_services.

A dedicated HAProxy node group can be specified via pg_service_provider to handle higher traffic; by default, HAProxy on local nodes publishes services.


For more information, see: Service Access and Config: PGSQL - PG_ACCESS


vip-manager

vip-manager binds L2 VIP to the current primary node. This is an optional component; enable it if your network supports L2 VIP.

vip-manager runs on each PG node, monitoring the leader key written by Patroni in etcd, and binds pg_vip_address to the current primary node’s network interface. When cluster failover occurs, vip-manager immediately releases the VIP from the old primary and rebinds it on the new primary, switching traffic to the new primary.

This component is optional, enabled via pg_vip_enabled. When enabled, ensure all nodes are in the same VLAN; otherwise, VIP migration will fail. Public cloud networks typically don’t support L2 VIP; it’s recommended only for on-premises and private cloud environments.
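The VIP binding and the leader key it follows can be observed directly. A sketch, assuming interface eth0 and Pigsty's default /pg etcd namespace with a cluster named pg-test (all placeholders for your deployment):

```shell
# Illustrative only: observe the VIP and the etcd leader key.
ip addr show eth0 | grep inet    # the VIP should appear on the current primary's interface
etcdctl get /pg/pg-test/leader   # prints the current leader instance name
```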


For more information, see: Tutorial: VIP Configuration and Config: PGSQL - PG_ACCESS


pg_exporter

pg_exporter exports PostgreSQL monitoring metrics, listening on port 9630 by default.

pg_exporter runs on each PG node, connecting to PostgreSQL via local Unix socket, exporting rich metrics covering sessions, buffer hits, replication lag, transaction rates, etc., scraped by VictoriaMetrics on INFRA nodes.

Collection configuration is specified by pg_exporter_config, with support for automatic database discovery (pg_exporter_auto_discovery), and tiered cache strategies via pg_exporter_cache_ttls.

You can disable this component via parameters; in slim install, this component is not enabled.


For more information, see: Config: PGSQL - PG_MONITOR


pgbouncer_exporter

pgbouncer_exporter exports Pgbouncer connection pool metrics, listening on port 9631 by default.

pgbouncer_exporter uses the same pg_exporter binary but with a dedicated metrics config file, supporting pgbouncer 1.8-1.25+. pgbouncer_exporter reads Pgbouncer statistics views, providing pool utilization, wait queue, and hit rate metrics.

If Pgbouncer is disabled, this component is also disabled. In slim install, this component is not enabled.

For more information, see: Config: PGSQL - PG_MONITOR


pgbackrest_exporter

pgbackrest_exporter exports backup status metrics, listening on port 9854 by default.

pgbackrest_exporter parses pgBackRest status, generating metrics for most recent backup time, size, type, etc. Combined with alerting policies, it quickly detects expired or failed backups, ensuring data safety. Note that when there are many backups or using large network repositories, collection overhead can be significant, so pgbackrest_exporter has a default 2-minute collection interval. In the worst case, you may see the latest backup status in the monitoring system 2 minutes after a backup completes.

For more information, see: Config: PGSQL - PG_MONITOR


etcd

etcd is a distributed consistent store (DCS), providing cluster metadata storage and leader election capability for Patroni.

etcd is deployed and managed by the independent ETCD module, not part of the PGSQL module itself, but critical for PostgreSQL HA. Patroni writes cluster state, leader info, and config parameters to etcd; all nodes reach consensus through etcd. vip-manager also reads the leader key from etcd to enable automatic VIP migration.

For more information, see: ETCD Module


vector

Vector is a high-performance log collection component, deployed by the NODE module, responsible for collecting PostgreSQL-related logs.

Vector runs on nodes, tracking PostgreSQL, Pgbouncer, Patroni, and pgBackRest log directories, sending structured logs to VictoriaLogs on INFRA nodes for centralized storage and querying.

For more information, see: NODE Module

3.2 - ER Model

How Pigsty abstracts different functionality into modules, and the E-R diagrams for these modules.

The largest entity concept in Pigsty is a Deployment. The main entities and relationships (E-R diagram) in a deployment are shown below:

A deployment can also be understood as an Environment. For example, Production (Prod), User Acceptance Testing (UAT), Staging, Testing, Development (Devbox), etc. Each environment corresponds to a Pigsty inventory that describes all entities and attributes in that environment.

Typically, an environment includes shared infrastructure (INFRA), which broadly includes ETCD (HA DCS) and MINIO (centralized backup repository), serving multiple PostgreSQL database clusters (and other database module components). (Exception: there are also deployments without infrastructure)

In Pigsty, almost all database modules are organized as “Clusters”. Each cluster is an Ansible group containing several node resources. For example, PostgreSQL HA database clusters, Redis, Etcd/MinIO all exist as clusters. An environment can contain multiple clusters.

3.2.1 - E-R Model of Infra Cluster

Entity-Relationship model for INFRA infrastructure nodes in Pigsty, component composition, and naming conventions.

The INFRA module plays a special role in Pigsty: it’s not a traditional “cluster” but rather a management hub composed of a group of infrastructure nodes, providing core services for the entire Pigsty deployment. Each INFRA node is an autonomous infrastructure service unit running core components like Nginx, Grafana, and VictoriaMetrics, collectively providing observability and management capabilities for managed database clusters.

There are two core entities in Pigsty’s INFRA module:

  • Node: A server running infrastructure components—can be bare metal, VM, container, or Pod.
  • Component: Various infrastructure services running on nodes, such as Nginx, Grafana, VictoriaMetrics, etc.

INFRA nodes typically serve as Admin Nodes, the control plane of Pigsty.


Component Composition

Each INFRA node runs the following core components:

| Component | Port | Description |
|-----------|------|-------------|
| Nginx | 80/443 | Web portal, local repo, unified reverse proxy |
| Grafana | 3000 | Visualization platform, dashboards, data apps |
| VictoriaMetrics | 8428 | Time-series database, Prometheus API compatible |
| VictoriaLogs | 9428 | Log database, receives structured logs from Vector |
| VictoriaTraces | 10428 | Trace storage for slow SQL / request tracing |
| VMAlert | 8880 | Alert rule evaluator based on VictoriaMetrics |
| Alertmanager | 9059 | Alert aggregation and dispatch |
| Blackbox Exporter | 9115 | ICMP/TCP/HTTP black-box probing |
| DNSMASQ | 53 | DNS server for internal domain resolution |
| Chronyd | 123 | NTP time server |

These components together form Pigsty’s observability infrastructure.


Examples

Let’s look at a concrete example with a two-node INFRA deployment:

infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }

The above config fragment defines a two-node INFRA deployment:

| Group | Description |
|-------|-------------|
| infra | INFRA infrastructure node group |

| Node | Description |
|------|-------------|
| infra-1 | 10.10.10.10 INFRA node #1 |
| infra-2 | 10.10.10.11 INFRA node #2 |

For production environments, deploying at least two INFRA nodes is recommended for infrastructure component redundancy.


Identity Parameters

Pigsty uses the INFRA_ID parameter group to assign deterministic identities to each INFRA module entity. One parameter is required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| infra_seq | int | Node | INFRA node sequence, required | Natural number, starting from 1, unique within group |

With node sequence assigned at node level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Node | infra-{{ infra_seq }} | infra-1, infra-2 |

The INFRA module assigns infra-N format identifiers to nodes for distinguishing multiple infrastructure nodes in the monitoring system. However, this doesn’t change the node’s hostname or system identity; nodes still use their existing hostname or IP address for identification.


Service Portal

INFRA nodes provide unified web service entry through Nginx. The infra_portal parameter defines services exposed through Nginx.

The default configuration only defines the home server:

infra_portal:
  home : { domain: i.pigsty }

Pigsty automatically configures reverse proxy endpoints for enabled components (Grafana, VictoriaMetrics, AlertManager, etc.). If you need to access these services via separate domains, you can explicitly add configurations:

infra_portal:
  home         : { domain: i.pigsty }
  grafana      : { domain: g.pigsty, endpoint: "${admin_ip}:3000", websocket: true }
  prometheus   : { domain: p.pigsty, endpoint: "${admin_ip}:8428" }   # VMUI
  alertmanager : { domain: a.pigsty, endpoint: "${admin_ip}:9059" }

| Domain | Service | Description |
|--------|---------|-------------|
| i.pigsty | Home | Pigsty homepage |
| g.pigsty | Grafana | Monitoring dashboard |
| p.pigsty | VictoriaMetrics | TSDB Web UI |
| a.pigsty | Alertmanager | Alert management UI |

Accessing Pigsty services via domain names is recommended over direct IP + port.
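Since these domains are internal, clients must resolve them to the INFRA node first. A sketch, with 10.10.10.10 as a placeholder INFRA node IP:

```shell
# Illustrative only: map Pigsty domains to the INFRA node, then browse by name.
echo '10.10.10.10 i.pigsty g.pigsty p.pigsty a.pigsty' | sudo tee -a /etc/hosts
curl -sI -H 'Host: g.pigsty' http://10.10.10.10/   # or open http://g.pigsty in a browser
```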


Deployment Scale

The number of INFRA nodes depends on deployment scale and HA requirements:

| Scale | INFRA Nodes | Description |
|-------|-------------|-------------|
| Dev/Test | 1 | Single-node deployment, all on one node |
| Small Prod | 1-2 | Single or dual node, can share with other services |
| Medium Prod | 2-3 | Dedicated INFRA nodes, redundant components |
| Large Prod | 3+ | Multiple INFRA nodes, component separation |

In singleton deployment, INFRA components share the same node with PGSQL, ETCD, etc. In small-scale deployments, INFRA nodes typically also serve as “Admin Node” / backup admin node and local software repository (/www/pigsty). In larger deployments, these responsibilities can be separated to dedicated nodes.


Monitoring Label System

Pigsty’s monitoring system collects metrics from INFRA components themselves. Unlike database modules, each component in the INFRA module is treated as an independent monitoring object, distinguished by the cls (class) label.

| Label | Description | Example |
|-------|-------------|---------|
| cls | Component type, each forming a "class" | nginx |
| ins | Instance name, format {component}-{infra_seq} | nginx-1 |
| ip | INFRA node IP running the component | 10.10.10.10 |
| job | VictoriaMetrics scrape job, fixed as infra | infra |

Using a two-node INFRA deployment (infra_seq: 1 and infra_seq: 2) as an example, the component monitoring labels are:

| Component | cls | ins Example | Port |
|-----------|-----|-------------|------|
| Nginx | nginx | nginx-1, nginx-2 | 9113 |
| Grafana | grafana | grafana-1, grafana-2 | 3000 |
| VictoriaMetrics | vmetrics | vmetrics-1, vmetrics-2 | 8428 |
| VictoriaLogs | vlogs | vlogs-1, vlogs-2 | 9428 |
| VictoriaTraces | vtraces | vtraces-1, vtraces-2 | 10428 |
| VMAlert | vmalert | vmalert-1, vmalert-2 | 8880 |
| Alertmanager | alertmanager | alertmanager-1, alertmanager-2 | 9059 |
| Blackbox | blackbox | blackbox-1, blackbox-2 | 9115 |

All INFRA component metrics use a unified job="infra" label, distinguished by the cls label:

nginx_up{cls="nginx", ins="nginx-1", ip="10.10.10.10", job="infra"}
grafana_info{cls="grafana", ins="grafana-1", ip="10.10.10.10", job="infra"}
vm_app_version{cls="vmetrics", ins="vmetrics-1", ip="10.10.10.10", job="infra"}
vlogs_rows_ingested_total{cls="vlogs", ins="vlogs-1", ip="10.10.10.10", job="infra"}
alertmanager_alerts{cls="alertmanager", ins="alertmanager-1", ip="10.10.10.10", job="infra"}

3.2.2 - E-R Model of PostgreSQL Cluster

Entity-Relationship model for PostgreSQL clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The PGSQL module organizes PostgreSQL in production as clusters: logical entities composed of a group of database instances associated by primary-replica relationships.

Each cluster is an autonomous business unit consisting of at least one primary instance, exposing capabilities through services.

There are four core entities in Pigsty’s PGSQL module:

  • Cluster: An autonomous PostgreSQL business unit serving as the top-level namespace for other entities.
  • Service: A named abstraction that exposes capabilities, routes traffic, and exposes services using node ports.
  • Instance: A single PostgreSQL server consisting of running processes and database files on a single node.
  • Node: A hardware resource abstraction running Linux + Systemd environment—can be bare metal, VM, container, or Pod.

Along with two business entities—“Database” and “Role”—these form the complete logical view as shown below:

[Figure: PGSQL cluster E-R diagram]


Examples

Let’s look at two concrete examples. Using the four-node Pigsty sandbox, there’s a three-node pg-test cluster:

    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica }
        10.10.10.13: { pg_seq: 3, pg_role: replica }
      vars: { pg_cluster: pg-test }

The above config fragment defines a high-availability PostgreSQL cluster with these related entities:

| Cluster | Description |
|---------|-------------|
| pg-test | PostgreSQL 3-node HA cluster |

| Instance | Description |
|----------|-------------|
| pg-test-1 | PostgreSQL instance #1, default primary |
| pg-test-2 | PostgreSQL instance #2, initial replica |
| pg-test-3 | PostgreSQL instance #3, initial replica |

| Service | Description |
|---------|-------------|
| pg-test-primary | Read-write service (routes to primary pgbouncer) |
| pg-test-replica | Read-only service (routes to replica pgbouncer) |
| pg-test-default | Direct read-write service (routes to primary postgres) |
| pg-test-offline | Offline read service (routes to dedicated postgres) |

| Node | Description |
|------|-------------|
| node-1 | 10.10.10.11 Node #1, hosts pg-test-1 PG instance |
| node-2 | 10.10.10.12 Node #2, hosts pg-test-2 PG instance |
| node-3 | 10.10.10.13 Node #3, hosts pg-test-3 PG instance |



Identity Parameters

Pigsty uses the PG_ID parameter group to assign deterministic identities to each PGSQL module entity. Three parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| pg_cluster | string | Cluster | PG cluster name, required | Valid DNS name, regex [a-zA-Z0-9-]+ |
| pg_seq | int | Instance | PG instance number, required | Natural number, starting from 0 or 1, unique within cluster |
| pg_role | enum | Instance | PG instance role, required | Enum: primary, replica, offline |

With cluster name defined at cluster level and instance number/role assigned at instance level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ pg_cluster }}-{{ pg_seq }} | pg-test-1, pg-test-2, pg-test-3 |
| Service | {{ pg_cluster }}-{{ pg_role }} | pg-test-primary, pg-test-replica, pg-test-offline |
| Node | Explicitly specified or borrowed from PG | pg-test-1, pg-test-2, pg-test-3 |

Because Pigsty adopts a 1:1 exclusive deployment model for nodes and PG instances, the host node identifier borrows from the PG instance identifier by default (node_id_from_pg). You can also explicitly specify nodename to override this, or disable nodename_overwrite to keep the node's existing hostname.


Sharding Identity Parameters

When using multiple PostgreSQL clusters (sharding) to serve the same business, two additional identity parameters are used: pg_shard and pg_group.

In this case, the group of PostgreSQL clusters shares the same pg_shard name, each with its own pg_group number. Cluster names (pg_cluster) are then typically composed as {{ pg_shard }}{{ pg_group }}, e.g., pg-citus0, pg-citus1, etc., as in this Citus cluster:

all:
  children:
    pg-citus0: # citus shard 0
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus0 , pg_group: 0 }
    pg-citus1: # citus shard 1
      hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus1 , pg_group: 1 }
    pg-citus2: # citus shard 2
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus2 , pg_group: 2 }
    pg-citus3: # citus shard 3
      hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus3 , pg_group: 3 }

Pigsty provides dedicated monitoring dashboards for horizontal sharding clusters, making it easy to compare performance and load across shards, but this requires using the above entity naming convention.

There are also other identity parameters for special scenarios, such as pg_upstream for specifying backup clusters/cascading replication upstream, gp_role for Greenplum cluster identity, pg_exporters for external monitoring instances, pg_offline_query for offline query instances, etc. See PG_ID parameter docs.


Monitoring Label System

Pigsty provides an out-of-box monitoring system that uses the above identity parameters to identify various PostgreSQL entities.

pg_up{cls="pg-test", ins="pg-test-1", ip="10.10.10.11", job="pgsql"}
pg_up{cls="pg-test", ins="pg-test-2", ip="10.10.10.12", job="pgsql"}
pg_up{cls="pg-test", ins="pg-test-3", ip="10.10.10.13", job="pgsql"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all native monitoring metrics collected by VictoriaMetrics and VictoriaLogs log streams.

The job name for collecting PostgreSQL metrics is fixed as pgsql; The job name for monitoring remote PG instances is fixed as pgrds. The job name for collecting PostgreSQL CSV logs is fixed as postgres; The job name for collecting pgbackrest logs is fixed as pgbackrest, other PG components collect logs via job: syslog.

Additionally, some entity identity labels appear in specific entity-related monitoring metrics, such as:

  • datname: Database name, if a metric belongs to a specific database.
  • relname: Table name, if a metric belongs to a specific table.
  • idxname: Index name, if a metric belongs to a specific index.
  • funcname: Function name, if a metric belongs to a specific function.
  • seqname: Sequence name, if a metric belongs to a specific sequence.
  • query: Query fingerprint, if a metric belongs to a specific query.
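These identity labels can be used directly in queries against the monitoring backend. A sketch using VictoriaMetrics' Prometheus-compatible query API on an assumed INFRA node IP and default port 8428:

```shell
# Illustrative only: count live PG instances per cluster via the identity labels.
curl -s 'http://10.10.10.10:8428/api/v1/query' \
  --data-urlencode 'query=count by (cls) (pg_up{job="pgsql"})'
```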

3.2.3 - E-R Model of Etcd Cluster

Entity-Relationship model for ETCD clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The ETCD module organizes ETCD in production as clusters: logical entities composed of a group of ETCD instances associated through the Raft consensus protocol.

Each cluster is an autonomous distributed key-value storage unit consisting of at least one ETCD instance, exposing service capabilities through client ports.

There are three core entities in Pigsty’s ETCD module:

  • Cluster: An autonomous ETCD service unit serving as the top-level namespace for other entities.
  • Instance: A single ETCD server process running on a node, participating in Raft consensus.
  • Node: A hardware resource abstraction running Linux + Systemd environment, implicitly declared.

Compared to PostgreSQL clusters, the ETCD cluster model is simpler, without Services or complex Role distinctions. All ETCD instances are functionally equivalent, electing a Leader through the Raft protocol while others become Followers. During scale-out intermediate states, non-voting Learner instance members are also allowed.


Examples

Let’s look at a concrete example with a three-node ETCD cluster:

etcd:
  hosts:
    10.10.10.10: { etcd_seq: 1 }
    10.10.10.11: { etcd_seq: 2 }
    10.10.10.12: { etcd_seq: 3 }
  vars:
    etcd_cluster: etcd

The above config fragment defines a three-node ETCD cluster with these related entities:

| Cluster | Description |
|---------|-------------|
| etcd | ETCD 3-node HA cluster |

| Instance | Description |
|----------|-------------|
| etcd-1 | ETCD instance #1 |
| etcd-2 | ETCD instance #2 |
| etcd-3 | ETCD instance #3 |

| Node | Description |
|------|-------------|
| 10.10.10.10 | Node #1, hosts etcd-1 instance |
| 10.10.10.11 | Node #2, hosts etcd-2 instance |
| 10.10.10.12 | Node #3, hosts etcd-3 instance |

Identity Parameters

Pigsty uses the ETCD parameter group to assign deterministic identities to each ETCD module entity. Two parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| etcd_cluster | string | Cluster | ETCD cluster name, required | Valid DNS name, defaults to fixed etcd |
| etcd_seq | int | Instance | ETCD instance number, required | Natural number, starting from 1, unique within cluster |

With cluster name defined at cluster level and instance number assigned at instance level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ etcd_cluster }}-{{ etcd_seq }} | etcd-1, etcd-2, etcd-3 |

The ETCD module does not assign additional identity to host nodes; nodes are identified by their existing hostname or IP address.


Ports & Protocols

Each ETCD instance listens on the following two ports:

| Port | Parameter | Purpose |
|------|-----------|---------|
| 2379 | etcd_port | Client port, accessed by Patroni, vip-manager, etc. |
| 2380 | etcd_peer_port | Peer communication port, used for Raft consensus |

ETCD clusters enable TLS encrypted communication by default and use RBAC authentication mechanism. Clients need correct certificates and passwords to access ETCD services.
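Because TLS is on by default, clients must present certificates. A sketch with placeholder certificate paths (substitute your deployment's actual cert locations):

```shell
# Illustrative only: check cluster health over TLS with client certs.
etcdctl --endpoints=https://10.10.10.10:2379 \
  --cacert=/path/to/ca.crt --cert=/path/to/client.crt --key=/path/to/client.key \
  endpoint health --cluster
```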


Cluster Size

As a distributed coordination service, ETCD cluster size directly affects availability, requiring more than half (quorum) of nodes to be alive to maintain service.

| Cluster Size | Quorum | Fault Tolerance | Use Case |
|--------------|--------|-----------------|----------|
| 1 node | 1 | 0 | Dev, test, demo |
| 3 nodes | 2 | 1 | Small-medium production |
| 5 nodes | 3 | 2 | Large-scale production |

Therefore, even-sized ETCD clusters add no fault tolerance over the next smaller odd size, and clusters of more than five nodes are uncommon. Typical sizes are one, three, and five nodes.
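The quorum rule above is simply "more than half", i.e. floor(n/2) + 1, which is why adding a fourth node does not improve on three:

```shell
# Quorum for an n-member cluster is floor(n/2) + 1; fault tolerance is n - quorum.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "size=$n quorum=$quorum tolerates=$(( n - quorum ))"
done
# size=4 tolerates only 1 failure, same as size=3
```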


Monitoring Label System

Pigsty provides an out-of-box monitoring system that uses the above identity parameters to identify various ETCD entities.

etcd_up{cls="etcd", ins="etcd-1", ip="10.10.10.10", job="etcd"}
etcd_up{cls="etcd", ins="etcd-2", ip="10.10.10.11", job="etcd"}
etcd_up{cls="etcd", ins="etcd-3", ip="10.10.10.12", job="etcd"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all ETCD monitoring metrics collected by VictoriaMetrics. The job name for collecting ETCD metrics is fixed as etcd.

3.2.4 - E-R Model of MinIO Cluster

Entity-Relationship model for MinIO clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The MinIO module organizes MinIO in production as clusters: logical entities composed of a group of distributed MinIO instances that collectively provide highly available object storage services.

Each cluster is an autonomous S3-compatible object storage unit consisting of at least one MinIO instance, exposing service capabilities through the S3 API port.

There are three core entities in Pigsty’s MinIO module:

  • Cluster: An autonomous MinIO service unit serving as the top-level namespace for other entities.
  • Instance: A single MinIO server process running on a node, managing local disk storage.
  • Node: A hardware resource abstraction running Linux + Systemd environment, implicitly declared.

Additionally, MinIO has the concept of Storage Pool, used for smooth cluster scaling. A cluster can contain multiple storage pools, each composed of a group of nodes and disks.


Deployment Modes

MinIO supports three main deployment modes for different scenarios:

| Mode | Code | Description | Use Case |
|------|------|-------------|----------|
| Single-Node Single-Drive | SNSD | Single node, single data directory or disk | Dev, test, demo |
| Single-Node Multi-Drive | SNMD | Single node, multiple disks, typically 4+ | Resource-constrained small deployments |
| Multi-Node Multi-Drive | MNMD | Multiple nodes, multiple disks per node | Production recommended |

SNSD mode can use any directory as storage for quick experimentation; SNMD and MNMD modes require real disk mount points, otherwise startup is refused.


Examples

Let’s look at a concrete multi-node multi-drive example with a four-node MinIO cluster:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
    10.10.10.13: { minio_seq: 4 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'
    minio_node: '${minio_cluster}-${minio_seq}.pigsty'

The above config fragment defines a four-node MinIO cluster with four disks per node:

| Cluster | Description |
|---------|-------------|
| minio | MinIO 4-node HA cluster |

| Instance | Description |
|----------|-------------|
| minio-1 | MinIO instance #1, managing 4 disks |
| minio-2 | MinIO instance #2, managing 4 disks |
| minio-3 | MinIO instance #3, managing 4 disks |
| minio-4 | MinIO instance #4, managing 4 disks |

| Node | Description |
|------|-------------|
| 10.10.10.10 | Node #1, hosts minio-1 instance |
| 10.10.10.11 | Node #2, hosts minio-2 instance |
| 10.10.10.12 | Node #3, hosts minio-3 instance |
| 10.10.10.13 | Node #4, hosts minio-4 instance |

Identity Parameters

Pigsty uses the MINIO parameter group to assign deterministic identities to each MinIO module entity. Two parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| minio_cluster | string | Cluster | MinIO cluster name, required | Valid DNS name, defaults to minio |
| minio_seq | int | Instance | MinIO instance number, required | Natural number, starting from 1, unique within cluster |

With cluster name defined at cluster level and instance number assigned at instance level, Pigsty automatically generates unique identifiers for each entity based on rules:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ minio_cluster }}-{{ minio_seq }} | minio-1, minio-2, minio-3, minio-4 |

The MinIO module does not assign additional identity to host nodes; nodes are identified by their existing hostname or IP address. The minio_node parameter generates node names for MinIO cluster internal use (written to /etc/hosts for cluster discovery), not host node identity.


Core Configuration Parameters

Beyond identity parameters, the following parameters are critical for MinIO cluster configuration:

| Parameter | Type | Description |
|-----------|------|-------------|
| minio_data | path | Data directory, use {x...y} for multi-drive |
| minio_node | string | Node name pattern for multi-node deployment |
| minio_domain | string | Service domain, defaults to sss.pigsty |

These parameters together determine MinIO’s core config MINIO_VOLUMES:

  • SNSD: Direct minio_data value, e.g., /data/minio
  • SNMD: Expanded minio_data directories, e.g., /data{1...4}
  • MNMD: Combined minio_node and minio_data, e.g., https://minio-{1...4}.pigsty:9000/data{1...4}

Ports & Services

Each MinIO instance listens on the following ports:

| Port | Parameter | Purpose |
|------|-----------|---------|
| 9000 | minio_port | S3 API service port |
| 9001 | minio_admin_port | Web admin console port |

MinIO enables HTTPS encrypted communication by default (controlled by minio_https). This is required for backup tools like pgBackRest to access MinIO.

Multi-node MinIO clusters can be accessed through any node. Best practice is to use a load balancer (e.g., HAProxy + VIP) for unified access point.
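Any S3 client can talk to the cluster through the service domain. A sketch using the MinIO mc client; the alias name, domain, and credentials below are placeholders, not your deployment's actual values:

```shell
# Illustrative only: register the cluster endpoint and inspect it with mc.
mc alias set sss https://sss.pigsty:9000 minioadmin minioadmin
mc ls sss/pgsql     # list the pgBackRest backup bucket
mc admin info sss   # cluster status: nodes, drives, usage
```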


Resource Provisioning

After MinIO cluster deployment, Pigsty automatically creates the following resources (controlled by minio_provision):

Default Buckets (defined by minio_buckets):

| Bucket | Purpose |
|--------|---------|
| pgsql | PostgreSQL pgBackRest backup storage |
| meta | Metadata storage, versioning enabled |
| data | General data storage |

Default Users (defined by minio_users):

| User | Default Password | Policy | Purpose |
|------|------------------|--------|---------|
| pgbackrest | S3User.Backup | pgsql | PostgreSQL backup dedicated user |
| s3user_meta | S3User.Meta | meta | Access meta bucket |
| s3user_data | S3User.Data | data | Access data bucket |

pgbackrest is used for PostgreSQL cluster backups; s3user_meta and s3user_data are reserved users not actively used.


Monitoring Label System

Pigsty provides an out-of-the-box monitoring system that uses the above identity parameters to identify various MinIO entities.

minio_up{cls="minio", ins="minio-1", ip="10.10.10.10", job="minio"}
minio_up{cls="minio", ins="minio-2", ip="10.10.10.11", job="minio"}
minio_up{cls="minio", ins="minio-3", ip="10.10.10.12", job="minio"}
minio_up{cls="minio", ins="minio-4", ip="10.10.10.13", job="minio"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all MinIO monitoring metrics collected by VictoriaMetrics. The job name for collecting MinIO metrics is fixed as minio.
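Because these labels are attached to every series, they can be used directly in queries. For example, a hedged PromQL sketch counting live instances per cluster from the minio_up metric shown above:

```promql
count by (cls) (minio_up == 1)
```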

3.2.5 - E-R Model of Redis Cluster

Entity-Relationship model for Redis clusters in Pigsty, including E-R diagram, entity definitions, and naming conventions.

The Redis module organizes Redis in production as clusters: logical entities composed of a group of Redis instances deployed on one or more nodes.

Each cluster is an autonomous high-performance cache/storage unit consisting of at least one Redis instance, exposing service capabilities through ports.

There are three core entities in Pigsty’s Redis module:

  • Cluster: An autonomous Redis service unit serving as the top-level namespace for other entities.
  • Instance: A single Redis server process running on a specific port on a node.
  • Node: A hardware resource abstraction running Linux + Systemd environment, can host multiple Redis instances, implicitly declared.

Unlike PostgreSQL, Redis uses a single-node multi-instance deployment model: one physical/virtual machine node typically hosts multiple Redis instances to fully utilize multi-core CPUs. Therefore, nodes and instances have a 1:N relationship. Additionally, production deployments typically avoid single Redis instances with more than 12GB of memory.


Operating Modes

Redis has three different operating modes, specified by the redis_mode parameter:

| Mode | Code | Description | HA Mechanism |
|------|------|-------------|--------------|
| Standalone | standalone | Classic master-replica, default mode | Requires Sentinel |
| Sentinel | sentinel | HA monitoring and auto-failover for standalone | Multi-node quorum |
| Native Cluster | cluster | Redis native distributed cluster, no sentinel needed | Built-in auto-failover |

  • Standalone: Default mode, replication via replica_of parameter. Requires additional Sentinel cluster for HA.
  • Sentinel: Stores no business data; dedicated to monitoring standalone Redis clusters and performing auto-failover. A multi-node Sentinel deployment is itself highly available.
  • Native Cluster: Data auto-sharded across multiple primaries, each can have multiple replicas, built-in HA, no sentinel needed.

Examples

Let’s look at concrete examples for each mode:

Standalone Cluster

Classic master-replica on a single node:

redis-ms:
  hosts:
    10.10.10.10:
      redis_node: 1
      redis_instances:
        6379: { }
        6380: { replica_of: '10.10.10.10 6379' }
  vars:
    redis_cluster: redis-ms
    redis_password: 'redis.ms'
    redis_max_memory: 64MB

| Cluster | Description |
|---------|-------------|
| redis-ms | Redis standalone cluster |

| Node | Description |
|------|-------------|
| redis-ms-1 | 10.10.10.10 Node #1, hosts 2 instances |

| Instance | Description |
|----------|-------------|
| redis-ms-1-6379 | Primary instance, listening on port 6379 |
| redis-ms-1-6380 | Replica instance, port 6380, replicates from 6379 |

Sentinel Cluster

Three sentinel instances on a single node for monitoring standalone clusters. Sentinel clusters specify monitored standalone clusters via redis_sentinel_monitor:

redis-sentinel:
  hosts:
    10.10.10.11:
      redis_node: 1
      redis_instances: { 26379: {}, 26380: {}, 26381: {} }
  vars:
    redis_cluster: redis-sentinel
    redis_password: 'redis.sentinel'
    redis_mode: sentinel
    redis_max_memory: 16MB
    redis_sentinel_monitor:
      - { name: redis-ms, host: 10.10.10.10, port: 6379, password: redis.ms, quorum: 2 }

Native Cluster

A Redis native distributed cluster with two nodes and six instances (minimum spec: 3 primaries, 3 replicas):

redis-test:
  hosts:
    10.10.10.12: { redis_node: 1, redis_instances: { 6379: {}, 6380: {}, 6381: {} } }
    10.10.10.13: { redis_node: 2, redis_instances: { 6379: {}, 6380: {}, 6381: {} } }
  vars:
    redis_cluster: redis-test
    redis_password: 'redis.test'
    redis_mode: cluster
    redis_max_memory: 32MB

This creates a 3 primary 3 replica native Redis cluster.

| Cluster | Description |
|---------|-------------|
| redis-test | Redis native cluster (3P3R) |

| Instance | Description |
|----------|-------------|
| redis-test-1-6379 | Instance on node 1, port 6379 |
| redis-test-1-6380 | Instance on node 1, port 6380 |
| redis-test-1-6381 | Instance on node 1, port 6381 |
| redis-test-2-6379 | Instance on node 2, port 6379 |
| redis-test-2-6380 | Instance on node 2, port 6380 |
| redis-test-2-6381 | Instance on node 2, port 6381 |

| Node | Description |
|------|-------------|
| redis-test-1 | 10.10.10.12 Node #1, hosts 3 instances |
| redis-test-2 | 10.10.10.13 Node #2, hosts 3 instances |

Identity Parameters

Pigsty uses the REDIS parameter group to assign deterministic identities to each Redis module entity. Three parameters are required:

| Parameter | Type | Level | Description | Format |
|-----------|------|-------|-------------|--------|
| redis_cluster | string | Cluster | Redis cluster name, required | Valid DNS name, regex [a-z][a-z0-9-]* |
| redis_node | int | Node | Redis node number, required | Natural number, starting from 1, unique within cluster |
| redis_instances | dict | Node | Redis instance definition, required | JSON object, key is port, value is instance config |

With cluster name defined at cluster level and node number/instance definition assigned at node level, Pigsty automatically generates unique identifiers for each entity:

| Entity | Generation Rule | Example |
|--------|-----------------|---------|
| Instance | {{ redis_cluster }}-{{ redis_node }}-{{ port }} | redis-ms-1-6379, redis-ms-1-6380 |
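A small shell sketch of this naming rule, reproducing the identifiers from the redis-ms example above:

```shell
# Instance name = <redis_cluster>-<redis_node>-<port>
redis_cluster=redis-ms
redis_node=1
for port in 6379 6380; do
  echo "${redis_cluster}-${redis_node}-${port}"
done
# -> redis-ms-1-6379
#    redis-ms-1-6380
```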

The Redis module does not assign additional identity to host nodes; nodes are identified by their existing hostname or IP address. redis_node is used for instance naming, not host node identity.


Instance Definition

redis_instances is a JSON object with port number as key and instance config as value:

redis_instances:
  6379: { }                                      # Primary instance, no extra config
  6380: { replica_of: '10.10.10.10 6379' }       # Replica, specify upstream primary
  6381: { replica_of: '10.10.10.10 6379' }       # Replica, specify upstream primary

Each Redis instance listens on a unique port within the node. You can choose any port number, but avoid system reserved ports (< 1024) and ports already used by Pigsty. The replica_of parameter sets the replication relationship in standalone mode; its format is '<ip> <port>', specifying the upstream primary's address and port.
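Under the hood, this corresponds to Redis's replicaof directive (the REPLICAOF command). A minimal shell sketch splitting the '<ip> <port>' value (variable names here are illustrative):

```shell
replica_of='10.10.10.10 6379'   # value taken from the inventory above
set -- $replica_of              # POSIX word-split: $1 = ip, $2 = port
echo "REPLICAOF $1 $2"          # -> REPLICAOF 10.10.10.10 6379
```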

Additionally, each Redis node runs a Redis Exporter collecting metrics from all local instances:

| Port | Parameter | Purpose |
|------|-----------|---------|
| 9121 | redis_exporter_port | Redis Exporter port |

Redis’s single-node multi-instance deployment model has some limitations:

  • Node Exclusive: A node can only belong to one Redis cluster, not assigned to different clusters simultaneously.
  • Port Unique: Redis instances on the same node must use different ports to avoid conflicts.
  • Password Shared: Multiple instances on the same node cannot have different passwords (redis_exporter limitation).
  • Manual HA: Standalone Redis clusters require additional Sentinel configuration for auto-failover.

Monitoring Label System

Pigsty provides an out-of-the-box monitoring system that uses the above identity parameters to identify various Redis entities.

redis_up{cls="redis-ms", ins="redis-ms-1-6379", ip="10.10.10.10", job="redis"}
redis_up{cls="redis-ms", ins="redis-ms-1-6380", ip="10.10.10.10", job="redis"}

For example, the cls, ins, ip labels correspond to cluster name, instance name, and node IP—the identifiers for these three core entities. They appear along with the job label in all Redis monitoring metrics collected by VictoriaMetrics. The job name for collecting Redis metrics is fixed as redis.

3.3 - Infra as Code

Pigsty uses Infrastructure as Code (IaC) philosophy to manage all components, providing declarative management for large-scale clusters.

Pigsty follows the IaC and GitOps philosophy: use a declarative config inventory to describe the entire environment, and materialize it through idempotent playbooks.

Users describe their desired state declaratively through parameters, and playbooks idempotently adjust target nodes to reach that state. This is similar to Kubernetes CRDs & Operators, but Pigsty implements this functionality on bare metal and virtual machines through Ansible.

Pigsty was born to solve the operational management problem of ultra-large-scale PostgreSQL clusters. The idea behind it is simple: we need the ability to replicate the entire infrastructure (100+ database clusters of PG/Redis, plus observability) on prepared servers within ten minutes. No GUI + ClickOps workflow can complete such a complex task in so short a time, making CLI + IaC the only choice: it provides precise, efficient control.

The config inventory pigsty.yml file describes the state of the entire deployment. Whether it’s production (prod), staging, test, or development (devbox) environments, the difference between infrastructures lies only in the config inventory, while the deployment delivery logic is exactly the same.

You can use git for version control and auditing of this deployment “seed/gene”, and Pigsty even supports storing the config inventory as database tables in a PostgreSQL CMDB, further achieving Infra-as-Data capability and integrating seamlessly with your existing workflows.

IaC is designed for professional users and enterprise scenarios, but it is also deeply optimized for individual developers and SMBs. Even if you’re not a professional DBA, you don’t need to understand these hundreds of tuning knobs and switches: all parameters come with well-performing default values. You can get an out-of-the-box single-node database with zero configuration; simply add two more IP addresses to get an enterprise-grade high-availability PostgreSQL cluster.


Declare Modules

Take the following default config snippet as an example. This config describes a node 10.10.10.10 with the INFRA, NODE, ETCD, MINIO, and PGSQL modules installed.

# monitoring, alerting, DNS, NTP and other infrastructure cluster...
infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

# minio cluster, s3 compatible object storage
minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

# etcd cluster, used as DCS for PostgreSQL high availability
etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

# PGSQL example cluster: pg-meta
pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }

To actually install these modules, execute the following playbooks:

./infra.yml -l 10.10.10.10  # Initialize infra module on node 10.10.10.10
./etcd.yml  -l 10.10.10.10  # Initialize etcd module on node 10.10.10.10
./minio.yml -l 10.10.10.10  # Initialize minio module on node 10.10.10.10
./pgsql.yml -l 10.10.10.10  # Initialize pgsql module on node 10.10.10.10

Declare Clusters

You can declare PostgreSQL database clusters by installing the PGSQL module on multiple nodes, making them a service unit:

For example, to deploy a three-node high-availability PostgreSQL cluster using streaming replication on the following three Pigsty-managed nodes, you can add the following definition to the all.children section of the config file pigsty.yml:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: offline }
  vars:  { pg_cluster: pg-test }

After defining, you can use playbooks to create the cluster:

bin/pgsql-add pg-test   # Create the pg-test cluster


You can use different instance roles such as primary, replica, offline, delayed, and sync standby, as well as different cluster types: standby clusters, Citus clusters, and even Redis / MinIO / Etcd clusters.


Customize Cluster Content

Not only can you define clusters declaratively, but you can also define databases, users, services, and HBA rules within the cluster. For example, the following config file deeply customizes the content of the default pg-meta single-node database cluster:

This includes declaring six business databases and seven business users, adding an extra standby service (a synchronous standby providing read capability with zero replication delay), defining some additional pg_hba rules, an L2 VIP address pointing to the cluster primary, and a customized backup strategy.

pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
  vars:
    pg_cluster: pg-meta
    pg_databases:                       # define business databases on this cluster, array of database definition
      - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
        baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g files/)
        pgbouncer: true                 # optional, add this database to pgbouncer database list? true by default
        schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
        extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
          - { name: postgis , schema: public }
          - { name: timescaledb }
        comment: pigsty meta database   # optional, comment string for this database
        owner: postgres                # optional, database owner, postgres by default
        template: template1            # optional, which template to use, template1 by default
        encoding: UTF8                 # optional, database encoding, UTF8 by default. (MUST same as template database)
        locale: C                      # optional, database locale, C by default.  (MUST same as template database)
        lc_collate: C                  # optional, database collate, C by default. (MUST same as template database)
        lc_ctype: C                    # optional, database ctype, C by default.   (MUST same as template database)
        tablespace: pg_default         # optional, default tablespace, 'pg_default' by default.
        allowconn: true                # optional, allow connection, true by default. false will disable connect at all
        revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
        register_datasource: true      # optional, register this database to grafana datasources? true by default
        connlimit: -1                  # optional, database connection limit, default -1 disable limit
        pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
        pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
        pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
        pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
        pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
        pool_max_db_conn: 100          # optional, max database connections at database level, default 100
      - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
      - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
      - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
      - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
      - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
    pg_users:                           # define business users/roles on this cluster, array of user definition
      - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
        password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
        login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
        superuser: false                # optional, is superuser? false by default
        createdb: false                 # optional, can create database? false by default
        createrole: false               # optional, can create role? false by default
        inherit: true                   # optional, can this role use inherited privileges? true by default
        replication: false              # optional, can this role do replication? false by default
        bypassrls: false                # optional, can this role bypass row level security? false by default
        pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
        connlimit: -1                   # optional, user connection limit, default -1 disable limit
        expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
        expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
        comment: pigsty admin user      # optional, comment string for this user/role
        roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
        parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
        pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
        pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
      - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
      - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
      - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
      - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway   }
      - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
      - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }
    pg_services:                        # extra services in addition to pg_default_services, array of service definition
      # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
      - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
        port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
        ip: "*"                         # optional, service bind ip address, `*` for all ip by default
        selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
        dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
        check: /sync                    # optional, health check url path, / by default
        backup: "[? pg_role == `primary`]"  # backup server selector
        maxconn: 3000                   # optional, max allowed front-end connection
        balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
        options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    pg_hba_rules:
      - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.2/24
    pg_vip_interface: eth1
    node_crontab:  # make a full backup 1 am everyday
      - '00 01 * * * postgres /pg/bin/pg-backup full'

Declare Access Control

You can also deeply customize Pigsty’s access control capabilities through declarative configuration. For example, the following config file provides deep security customization for the pg-meta cluster:

It uses the three-node core cluster template crit.yml to prioritize data consistency, with zero data loss during failover. It enables an L2 VIP and restricts the database and connection pool listening addresses to three specific addresses: the local loopback IP, the internal network IP, and the VIP. The template enforces SSL on Patroni’s API and on Pgbouncer, and the HBA rules enforce SSL for access to the database cluster. It also enables the $libdir/passwordcheck extension in pg_libs to enforce a password strength policy.

Finally, a separate pg-meta-delay cluster is declared as pg-meta’s delayed replica from one hour ago, for emergency data deletion recovery.

pg-meta:      # 3 instance postgres cluster `pg-meta`
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }
    10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
  vars:
    pg_cluster: pg-meta
    pg_conf: crit.yml
    pg_users:
      - { name: dbuser_meta , password: DBUser.Meta   , pgbouncer: true , roles: [ dbrole_admin ] , comment: pigsty admin user }
      - { name: dbuser_view , password: DBUser.Viewer , pgbouncer: true , roles: [ dbrole_readonly ] , comment: read-only viewer for meta database }
    pg_databases:
      - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: postgis, schema: public}, {name: timescaledb}]}
    pg_default_service_dest: postgres
    pg_services:
      - { name: standby ,src_ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
    pg_vip_enabled: true
    pg_vip_address: 10.10.10.2/24
    pg_vip_interface: eth1
    pg_listen: '${ip},${vip},${lo}'
    patroni_ssl_enabled: true
    pgbouncer_sslmode: require
    pgbackrest_method: minio
    pg_libs: 'timescaledb, $libdir/passwordcheck, pg_stat_statements, auto_explain' # add passwordcheck extension to enforce strong password
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly]               ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite]  ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
      - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [pg_monitor, dbrole_readonly]   ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,expire_in: 7300 ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_hba_rules:             # postgres host-based auth rules by default
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  }
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' }
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: ssl   ,title: 'replicator replication from localhost'}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: ssl   ,title: 'replicator replication from intranet' }
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: ssl   ,title: 'replicator postgres db from intranet' }
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' }
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: ssl   ,title: 'monitor from infra host with password'}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   }
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'   }
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: ssl   ,title: 'pgbouncer read/write via local socket'}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: ssl   ,title: 'read/write biz user via password'     }
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: ssl   ,title: 'allow etl offline tasks from intranet'}
    pgb_default_hba_rules:            # pgbouncer host-based authentication rules
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident'}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' }
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: ssl   ,title: 'monitor access via intranet with pwd' }
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' }
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: ssl   ,title: 'admin access via intranet with pwd'   }
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   }
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: ssl   ,title: 'allow all user intra access with pwd' }

# OPTIONAL delayed cluster for pg-meta
pg-meta-delay:                    # delayed instance for pg-meta (1 hour ago)
  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
  vars: { pg_cluster: pg-meta-delay }

Citus Distributed Cluster

Below is a declarative configuration for a four-node Citus distributed cluster:

all:
  children:
    pg-citus0: # citus coordinator, pg_group = 0
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus0 , pg_group: 0 }
    pg-citus1: # citus data node 1
      hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus1 , pg_group: 1 }
    pg-citus2: # citus data node 2
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus2 , pg_group: 2 }
    pg-citus3: # citus data node 3, with an extra replica
      hosts:
        10.10.10.13: { pg_seq: 1, pg_role: primary }
        10.10.10.14: { pg_seq: 2, pg_role: replica }
      vars: { pg_cluster: pg-citus3 , pg_group: 3 }
  vars:                               # global parameters for all citus clusters
    pg_mode: citus                    # pgsql cluster mode: citus
    pg_shard: pg-citus                # citus shard name: pg-citus
    patroni_citus_db: meta            # citus distributed database name
    pg_dbsu_password: DBUser.Postgres # all dbsu password access for citus cluster
    pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta ,extensions: [ { name: citus }, { name: postgis }, { name: timescaledb } ] } ]
    pg_hba_rules:
      - { user: 'all' ,db: all  ,addr: 127.0.0.1/32 ,auth: ssl ,title: 'all user ssl access from localhost' }
      - { user: 'all' ,db: all  ,addr: intra        ,auth: ssl ,title: 'all user ssl access from intranet'  }

Redis Clusters

Below are declarative configuration examples for Redis primary-replica cluster, sentinel cluster, and Redis Cluster:

redis-ms: # redis classic primary & replica
  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
  vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

redis-meta: # redis sentinel x 3
  hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
  vars:
    redis_cluster: redis-meta
    redis_password: 'redis.meta'
    redis_mode: sentinel
    redis_max_memory: 16MB
    redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
      - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

redis-test: # redis native cluster: 3m x 3s
  hosts:
    10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
    10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
  vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }

ETCD Cluster

Below is a declarative configuration example for a three-node Etcd cluster:

etcd: # dcs service for postgres/patroni ha consensus
  hosts:  # 1 node for testing, 3 or 5 for production
    10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
    10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
    10.10.10.12: { etcd_seq: 3 }  # odd number please
  vars: # cluster level parameter override roles/etcd
    etcd_cluster: etcd  # mark etcd cluster name etcd
    etcd_safeguard: false # safeguard against purging
    etcd_clean: true # purge etcd during init process

MinIO Cluster

Below is a declarative configuration example for a three-node MinIO cluster:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...2}'          # use two disks per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # node name pattern
    haproxy_services:
      - name: minio                     # [required] service name, must be unique
        port: 9002                      # [required] service port, must be unique
        options:
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 , port: 9000 , options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

3.3.1 - Inventory

Describe your infrastructure and clusters using declarative configuration files

Every Pigsty deployment corresponds to an Inventory that describes key properties of the infrastructure and database clusters.


Configuration File

Pigsty uses Ansible YAML configuration format by default, with a single YAML configuration file pigsty.yml as the inventory.

~/pigsty
  ^---- pigsty.yml   # <---- Default configuration file

You can directly edit this configuration file to customize your deployment, or use the configure wizard script provided by Pigsty to automatically generate an appropriate configuration file.


Configuration Structure

The inventory uses standard Ansible YAML configuration format, consisting of two parts: global parameters (all.vars) and multiple groups (all.children).

You can define new clusters in all.children and describe the infrastructure using global variables: all.vars, which looks like this:

all:                  # Top-level object: all
  vars: {...}         # Global parameters
  children:           # Group definitions
    infra:            # Group definition: 'infra'
      hosts: {...}        # Group members: 'infra'
      vars:  {...}        # Group parameters: 'infra'
    etcd:    {...}    # Group definition: 'etcd'
    pg-meta: {...}    # Group definition: 'pg-meta'
    pg-test: {...}    # Group definition: 'pg-test'
    redis-test: {...} # Group definition: 'redis-test'
    # ...

Cluster Definition

Each Ansible group may represent a cluster, which can be a node cluster, PostgreSQL cluster, Redis cluster, Etcd cluster, MinIO cluster, etc.

A cluster definition consists of two parts: cluster members (hosts) and cluster parameters (vars). You can define cluster members in <cls>.hosts and describe the cluster using configuration parameters in <cls>.vars. Here’s an example of a 3-node high-availability PostgreSQL cluster definition:

all:
  children:    # Ansible group list
    pg-test:   # Ansible group name
      hosts:   # Ansible group instances (cluster members)
        10.10.10.11: { pg_seq: 1, pg_role: primary } # Host 1
        10.10.10.12: { pg_seq: 2, pg_role: replica } # Host 2
        10.10.10.13: { pg_seq: 3, pg_role: offline } # Host 3
      vars:    # Ansible group variables (cluster parameters)
        pg_cluster: pg-test

Cluster-level vars (cluster parameters) override global parameters, and instance-level vars override both cluster parameters and global parameters.
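For instance, a global default can be overridden at the cluster level and again at the instance level. The sketch below uses the real pg_version parameter with illustrative values:

```yaml
all:
  vars:
    pg_version: 18          # global default: PostgreSQL 18
  children:
    pg-test:
      vars:
        pg_version: 17      # cluster-level override: this cluster runs 17
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica, pg_version: 16 }  # instance-level override
```

Here 10.10.10.11 gets PostgreSQL 17 from the cluster vars, while 10.10.10.12 gets 16 from its own host vars.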


Splitting Configuration

If your deployment is large or you want to better organize configuration files, you can split the inventory into multiple files for easier management and maintenance.

inventory/
├── hosts.yml              # Host and cluster definitions
├── group_vars/
│   ├── all.yml            # Global default variables (corresponds to all.vars)
│   ├── infra.yml          # infra group variables
│   ├── etcd.yml           # etcd group variables
│   └── pg-meta.yml        # pg-meta cluster variables
└── host_vars/
    ├── 10.10.10.10.yml    # Specific host variables
    └── 10.10.10.11.yml

You can place cluster member definitions in the hosts.yml file and put cluster-level configuration parameters in corresponding files under the group_vars directory.
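For example, the cluster parameters of pg-test could move out of pigsty.yml into a group_vars file, while its members stay in hosts.yml. This is a sketch; adapt file names and parameters to your own layout:

```yaml
# inventory/group_vars/pg-test.yml — cluster-level parameters for pg-test
pg_cluster: pg-test
pg_version: 17
```

Ansible merges these group variables with the host definitions automatically when the playbook runs with `-i inventory/`.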


Switching Configuration

You can temporarily specify a different inventory file when running playbooks using the -i parameter.

./pgsql.yml -i another_config.yml
./infra.yml -i nginx_config.yml

Additionally, Ansible supports multiple configuration methods. You can use local yaml|ini configuration files, or use CMDB and any dynamic configuration scripts as configuration sources.

In Pigsty, we specify pigsty.yml in the same directory as the default inventory through ansible.cfg in the Pigsty home directory. You can modify it as needed.

[defaults]
inventory = pigsty.yml

Additionally, Pigsty supports using a CMDB metabase to store the inventory, facilitating integration with existing systems.

3.3.2 - Configure

Use the configure script to automatically generate recommended configuration files based on your environment.

Pigsty provides a configure script as a configuration wizard that automatically generates an appropriate pigsty.yml configuration file based on your current environment.

This is an optional script: if you already understand how to configure Pigsty, you can directly edit the pigsty.yml configuration file and skip the wizard.


Quick Start

Enter the pigsty source home directory and run ./configure to automatically start the configuration wizard. Without any arguments, it defaults to the meta single-node configuration template:

cd ~/pigsty
./configure          # Interactive configuration wizard, auto-detect environment and generate config

This command will use the selected template as a base, detect the current node’s IP address and region, and generate a pigsty.yml configuration file suitable for the current environment.

Features

The configure script performs the following adjustments based on environment and input, generating a pigsty.yml configuration file in the current directory.

  • Detects the current node IP address; if multiple IPs exist, prompts the user to input a primary IP address as the node’s identity
  • Uses the IP address to replace the placeholder 10.10.10.10 in the configuration template and sets it as the admin_ip parameter value
  • Detects the current region, setting region to default (global default repos) or china (using Chinese mirror repos)
  • For micro instances (vCPU < 4), uses the tiny parameter template for node_tune and pg_conf to optimize resource usage
  • If -v PG major version is specified, sets pg_version and all PG alias parameters to the corresponding major version
  • If -g is specified, replaces all default passwords with randomly generated strong passwords for enhanced security (strongly recommended)
  • When PG major version ≥ 17, prioritizes the built-in C.UTF-8 locale, or the OS-supported C.UTF-8
  • Checks if the core dependency ansible for deployment is available in the current environment
  • Also checks if the deployment target node is SSH-reachable and can execute commands with sudo (-s to skip)

Usage Examples

# Basic usage
./configure                       # Interactive configuration wizard
./configure -i 10.10.10.10        # Specify primary IP address

# Specify configuration template
./configure -c meta               # Use default single-node template (default)
./configure -c rich               # Use feature-rich single-node template
./configure -c slim               # Use minimal template (PGSQL + ETCD only)
./configure -c ha/full            # Use 4-node HA sandbox template
./configure -c ha/trio            # Use 3-node HA template
./configure -c app/supa           # Use Supabase self-hosted template

# Specify PostgreSQL version
./configure -v 17                 # Use PostgreSQL 17
./configure -v 16                 # Use PostgreSQL 16
./configure -c rich -v 16         # rich template + PG 16

# Region and proxy
./configure -r china              # Use Chinese mirrors
./configure -r europe             # Use European mirrors
./configure -x                    # Import current proxy environment variables

# Skip and automation
./configure -s                    # Skip IP detection, keep placeholder
./configure -n -i 10.10.10.10     # Non-interactive mode with specified IP
./configure -c ha/full -s         # 4-node template, skip IP replacement

# Security enhancement
./configure -g                    # Generate random passwords
./configure -c meta -g -i 10.10.10.10  # Complete production configuration

# Specify output and SSH port
./configure -o prod.yml           # Output to prod.yml
./configure -p 2222               # Use SSH port 2222

Command Arguments

./configure
    [-c|--conf <template>]      # Configuration template name (meta|rich|slim|ha/full|...)
    [-i|--ip <ipaddr>]          # Specify primary IP address
    [-v|--version <pgver>]      # PostgreSQL major version (13|14|15|16|17|18)
    [-r|--region <region>]      # Upstream software repo region (default|china|europe)
    [-o|--output <file>]        # Output configuration file path (default: pigsty.yml)
    [-s|--skip]                 # Skip IP address detection and replacement
    [-x|--proxy]                # Import proxy settings from environment variables
    [-n|--non-interactive]      # Non-interactive mode (don't ask any questions)
    [-p|--port <port>]          # Specify SSH port
    [-g|--generate]             # Generate random passwords
    [-h|--help]                 # Display help information

Argument Details

| Argument | Description |
|----------|-------------|
| -c, --conf | Generate config from conf/<template>.yml, supports subdirectories like ha/full |
| -i, --ip | Replace placeholder 10.10.10.10 in config template with specified IP |
| -v, --version | Specify PostgreSQL major version (13-18), keeps template default if not specified |
| -r, --region | Set software repo mirror region: default, china (Chinese mirrors), europe (European mirrors) |
| -o, --output | Specify output file path, defaults to pigsty.yml |
| -s, --skip | Skip IP address detection and replacement, keep 10.10.10.10 placeholder in template |
| -x, --proxy | Write current environment proxy variables (HTTP_PROXY, HTTPS_PROXY, ALL_PROXY, NO_PROXY) to config |
| -n, --non-interactive | Non-interactive mode, don't ask any questions (requires -i to specify IP) |
| -p, --port | Specify SSH port (when using a port other than the default 22) |
| -g, --generate | Generate random values for passwords in config file, improving security (strongly recommended) |

Execution Flow

The configure script executes detection and configuration in the following order:

┌─────────────────────────────────────────────────────────────┐
│                  configure Execution Flow                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. check_region          Detect network region (GFW check) │
│         ↓                                                   │
│  2. check_version         Validate PostgreSQL version       │
│         ↓                                                   │
│  3. check_kernel          Detect OS kernel (Linux/Darwin)   │
│         ↓                                                   │
│  4. check_machine         Detect CPU arch (x86_64/aarch64)  │
│         ↓                                                   │
│  5. check_package_manager Detect package manager (dnf/yum/apt) │
│         ↓                                                   │
│  6. check_vendor_version  Detect OS distro and version      │
│         ↓                                                   │
│  7. check_sudo            Detect passwordless sudo          │
│         ↓                                                   │
│  8. check_ssh             Detect passwordless SSH to self   │
│         ↓                                                   │
│  9. check_proxy           Handle proxy environment vars     │
│         ↓                                                   │
│ 10. check_ipaddr          Detect/input primary IP address   │
│         ↓                                                   │
│ 11. check_admin           Validate admin SSH + Sudo access  │
│         ↓                                                   │
│ 12. check_conf            Select configuration template     │
│         ↓                                                   │
│ 13. check_config          Generate configuration file       │
│         ↓                                                   │
│ 14. check_utils           Check if Ansible etc. installed   │
│         ↓                                                   │
│     ✓ Configuration complete, output pigsty.yml             │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Automatic Behaviors

Region Detection

The script automatically detects the network environment to determine if you’re in mainland China (behind GFW):

# Check network environment by accessing Google
curl -I -s --connect-timeout 1 www.google.com
  • If Google is inaccessible, automatically sets region: china to use domestic mirrors
  • If accessible, uses region: default default mirrors
  • Can manually specify region via -r argument

IP Address Handling

The script determines the primary IP address in the following priority:

  1. Command line argument: If IP is specified via -i, use it directly
  2. Single IP detection: If the current node has only one IP, use it automatically
  3. Demo IP detection: If 10.10.10.10 is detected, select it automatically (for sandbox environments)
  4. Interactive input: When multiple IPs exist, prompt user to choose or input
[WARN] Multiple IP address candidates found:
    (1) 192.168.1.100   inet 192.168.1.100/24 scope global eth0
    (2) 10.10.10.10     inet 10.10.10.10/24 scope global eth1
[ IN ] INPUT primary_ip address (of current meta node, e.g 10.10.10.10):
=> 10.10.10.10

Low-End Hardware Optimization

When fewer than 4 CPU cores are detected, the script automatically adjusts the configuration:

[WARN] replace oltp template with tiny due to cpu < 4

This ensures smooth operation on low-spec virtual machines.

Locale Settings

The script automatically enables C.UTF-8 as the default locale when:

  • PostgreSQL version ≥ 17 (built-in Locale Provider support)
  • Or the current system supports C.UTF-8 / C.utf8 locale
pg_locale: C.UTF-8
pg_lc_collate: C.UTF-8
pg_lc_ctype: C.UTF-8

China Region Special Handling

When region is set to china, the script automatically:

  • Enables docker_registry_mirrors Docker mirror acceleration
  • Enables PIP_MIRROR_URL Python mirror acceleration

Password Generation

When using the -g argument, the script generates 24-character random strings for the following passwords:

| Password Parameter | Description |
|--------------------|-------------|
| grafana_admin_password | Grafana admin password |
| pg_admin_password | PostgreSQL admin password |
| pg_monitor_password | PostgreSQL monitor user password |
| pg_replication_password | PostgreSQL replication user password |
| patroni_password | Patroni API password |
| haproxy_admin_password | HAProxy admin password |
| minio_secret_key | MinIO Secret Key |
| etcd_root_password | ETCD Root password |

It also replaces the following placeholder passwords:

  • DBUser.Meta → random password
  • DBUser.Viewer → random password
  • S3User.Backup → random password
  • S3User.Meta → random password
  • S3User.Data → random password
$ ./configure -g
[INFO] generating random passwords...
    grafana_admin_password   : xK9mL2nP4qR7sT1vW3yZ5bD8
    pg_admin_password        : aB3cD5eF7gH9iJ1kL2mN4oP6
    ...
[INFO] random passwords generated, check and save them
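A 24-character alphanumeric password can be drawn from the kernel's entropy pool with standard tools. This is an illustrative equivalent of what `-g` does, not the script's actual code:

```shell
# Draw 24 alphanumeric characters from /dev/urandom (illustrative only)
PW=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
echo "$PW"
```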

Configuration Templates

The script reads configuration templates from the conf/ directory, supporting the following templates:

Core Templates

| Template | Description |
|----------|-------------|
| meta | Default template: Single-node installation with INFRA + NODE + ETCD + PGSQL |
| rich | Feature-rich version: Includes almost all extensions, MinIO, local repo |
| slim | Minimal version: PostgreSQL + ETCD only, no monitoring infrastructure |
| fat | Complete version: rich base with more extensions installed |
| pgsql | Pure PostgreSQL template |
| infra | Pure infrastructure template |

HA Templates (ha/)

| Template | Description |
|----------|-------------|
| ha/dual | 2-node HA cluster |
| ha/trio | 3-node HA cluster |
| ha/full | 4-node complete sandbox environment |
| ha/safe | Security-hardened HA configuration |
| ha/simu | 42-node large-scale simulation environment |

Application Templates (app/)

| Template | Description |
|----------|-------------|
| supabase | Supabase self-hosted configuration |
| app/dify | Dify AI platform configuration |
| app/odoo | Odoo ERP configuration |
| app/teable | Teable table database configuration |
| app/registry | Docker Registry configuration |

Special Kernel Templates

| Template | Description |
|----------|-------------|
| ivory | IvorySQL: Oracle-compatible PostgreSQL |
| mssql | Babelfish: SQL Server-compatible PostgreSQL |
| polar | PolarDB: Alibaba Cloud open-source distributed PostgreSQL |
| citus | Citus: Distributed PostgreSQL |
| oriole | OrioleDB: Next-generation storage engine |

Demo Templates (demo/)

| Template | Description |
|----------|-------------|
| demo/demo | Demo environment configuration |
| demo/redis | Redis cluster demo |
| demo/minio | MinIO cluster demo |

Output Example

$ ./configure
configure pigsty v4.0.0 begin
[ OK ] region = china
[ OK ] kernel  = Linux
[ OK ] machine = x86_64
[ OK ] package = rpm,dnf
[ OK ] vendor  = rocky (Rocky Linux)
[ OK ] version = 9 (9.5)
[ OK ] sudo = vagrant ok
[ OK ] ssh = [email protected] ok
[WARN] Multiple IP address candidates found:
    (1) 192.168.121.193	    inet 192.168.121.193/24 brd 192.168.121.255 scope global dynamic noprefixroute eth0
    (2) 10.10.10.10	    inet 10.10.10.10/24 brd 10.10.10.255 scope global noprefixroute eth1
[ OK ] primary_ip = 10.10.10.10 (from demo)
[ OK ] admin = [email protected] ok
[ OK ] mode = meta (el9)
[ OK ] locale  = C.UTF-8
[ OK ] ansible = ready
[ OK ] pigsty configured
[WARN] don't forget to check it and change passwords!
proceed with ./deploy.yml

Environment Variables

The script supports the following environment variables:

| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| PIGSTY_HOME | Pigsty installation directory | ~/pigsty |
| METADB_URL | Metabase connection URL | service=meta |
| HTTP_PROXY | HTTP proxy | - |
| HTTPS_PROXY | HTTPS proxy | - |
| ALL_PROXY | Universal proxy | - |
| NO_PROXY | Proxy whitelist | Built-in default |

Notes

  1. Passwordless access: Before running configure, ensure the current user has passwordless sudo privileges and passwordless SSH to localhost. This can be automatically configured via the bootstrap script.

  2. IP address selection: Choose an internal IP as the primary IP address, not a public IP or 127.0.0.1.

  3. Password security: In production environments, always modify default passwords in the configuration file, or use the -g argument to generate random passwords.

  4. Configuration review: After the script completes, it’s recommended to review the generated pigsty.yml file to confirm the configuration meets expectations.

  5. Multiple executions: You can run configure multiple times to regenerate configuration; each run will overwrite the existing pigsty.yml.

  6. macOS limitations: When running on macOS, the script skips some Linux-specific checks and uses placeholder IP 10.10.10.10. macOS can only serve as an admin node.


FAQ

How to use a custom configuration template?

Place your configuration file in the conf/ directory, then specify it with the -c argument:

cp my-config.yml ~/pigsty/conf/myconf.yml
./configure -c myconf

How to generate different configurations for multiple clusters?

Use the -o argument to specify different output files:

./configure -c ha/full -o cluster-a.yml
./configure -c ha/trio -o cluster-b.yml

Then specify the configuration file when running playbooks:

./deploy.yml -i cluster-a.yml

How to handle multiple IPs in non-interactive mode?

You must explicitly specify the IP address using the -i argument:

./configure -n -i 10.10.10.10

How to keep the placeholder IP in the template?

Use the -s argument to skip IP replacement:

./configure -c ha/full -s   # Keep 10.10.10.10 placeholder

  • Inventory: Understand the Ansible inventory structure
  • Parameters: Understand Pigsty parameter hierarchy and priority
  • Templates: View all available configuration templates
  • Installation: Understand the complete installation process
  • Metabase: Use PostgreSQL as a dynamic configuration source

3.3.3 - Parameters

Fine-tune Pigsty customization using configuration parameters

In the inventory, you can use various parameters to fine-tune Pigsty customization. These parameters cover everything from infrastructure settings to database configuration.


Parameter List

Pigsty provides approximately 380+ configuration parameters distributed across 8 default modules for fine-grained control of various system aspects. See Reference - Parameter List for the complete list.

| Module | Groups | Params | Description |
|--------|--------|--------|-------------|
| PGSQL | 9 | 123 | Core configuration for PostgreSQL database clusters |
| INFRA | 10 | 82 | Infrastructure: repos, Nginx, DNS, monitoring, Grafana, etc. |
| NODE | 11 | 83 | Host node tuning: identity, DNS, packages, tuning, security, admin, time, VIP, etc. |
| ETCD | 2 | 13 | Distributed configuration store and service discovery |
| REDIS | 1 | 21 | Redis cache and data structure server |
| MINIO | 2 | 21 | S3-compatible object storage service |
| FERRET | 1 | 9 | MongoDB-compatible database FerretDB |
| DOCKER | 1 | 8 | Docker container engine |

Parameter Form

Parameters are key-value pairs that describe entities. The Key is a string, and the Value can be one of five types: boolean, string, number, array, or object.

all:                            # <------- Top-level object: all
  vars:
    admin_ip: 10.10.10.10       # <------- Global configuration parameter
  children:
    pg-meta:                    # <------- pg-meta group
      vars:
        pg_cluster: pg-meta     # <------- Cluster-level parameter
      hosts:
        10.10.10.10:            # <------- Host node IP
          pg_seq: 1
          pg_role: primary      # <------- Instance-level parameter

Parameter Priority

Parameters can be set at different levels with the following priority:

| Level | Location | Description | Priority |
|-------|----------|-------------|----------|
| CLI | -e command line argument | Passed via command line | Highest (5) |
| Host/Instance | <group>.hosts.<host> | Parameters specific to a single host | Higher (4) |
| Group/Cluster | <group>.vars | Parameters shared by hosts in group/cluster | Medium (3) |
| Global | all.vars | Parameters shared by all hosts | Lower (2) |
| Default | <roles>/default/main.yml | Role implementation defaults | Lowest (1) |

Here are some examples of parameter priority:

  • Use command line parameter -e grafana_clean=true when running playbooks to wipe Grafana data
  • Use instance-level parameter pg_role on host variables to override pg instance role
  • Use cluster-level parameter pg_cluster on group variables to override pg cluster name
  • Use global parameter node_ntp_servers on global variables to specify global NTP servers
  • If pg_version is not set, Pigsty will use the default value from the pgsql role implementation (default is 18)

Except for identity parameters, every parameter has an appropriate default value, so explicit setting is not required.
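The layered lookup described above behaves like a chain of dictionaries searched from highest to lowest priority. A minimal Python sketch (not Pigsty code, values illustrative) of how such resolution works:

```python
from collections import ChainMap

# Earlier maps win: CLI > host > group > global > role default
cli     = {"grafana_clean": True}
host    = {"pg_role": "replica"}
group   = {"pg_cluster": "pg-test", "pg_version": 17}
glob    = {"pg_version": 18, "node_ntp_servers": ["pool.ntp.org"]}
default = {"pg_version": 18, "pg_port": 5432}

params = ChainMap(cli, host, group, glob, default)
print(params["pg_version"])   # 17 — cluster-level value shadows global and default
print(params["pg_port"])      # 5432 — falls through to the role default
```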


Identity Parameters

Identity parameters are special parameters that serve as entity ID identifiers, therefore they have no default values and must be explicitly set.

| Module | Identity Parameters |
|--------|---------------------|
| PGSQL | pg_cluster, pg_seq, pg_role, … |
| NODE | nodename, node_cluster |
| ETCD | etcd_cluster, etcd_seq |
| MINIO | minio_cluster, minio_seq |
| REDIS | redis_cluster, redis_node, redis_instances |
| INFRA | infra_seq |

The exceptions are etcd_cluster and minio_cluster, which do have default values: Pigsty assumes each deployment runs a single etcd cluster for DCS and at most one optional MinIO cluster for centralized backup storage, so these default to the cluster names etcd and minio. You can still deploy additional etcd or MinIO clusters under different names.
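As an illustration (the cluster name etcd-backup is hypothetical), a second etcd cluster would simply set a non-default cluster name:

```yaml
etcd-backup:                    # hypothetical second etcd cluster
  hosts:
    10.10.10.21: { etcd_seq: 1 }
    10.10.10.22: { etcd_seq: 2 }
    10.10.10.23: { etcd_seq: 3 }
  vars:
    etcd_cluster: etcd-backup   # override the default cluster name 'etcd'
```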

3.3.4 - Conf Templates

Use pre-made configuration templates to quickly generate configuration files adapted to your environment

In Pigsty, deployment blueprint details are defined by the inventory, which is the pigsty.yml configuration file. You can customize it through declarative configuration.

However, writing configuration files directly can be daunting for new users. To address this, we provide some ready-to-use configuration templates covering common usage scenarios.

Each template is a predefined pigsty.yml configuration file containing reasonable defaults suitable for specific scenarios.

You can choose a template as your customization starting point, then modify it as needed to meet your specific requirements.


Using Templates

Pigsty provides the configure script as an optional configuration wizard that generates an inventory with good defaults based on your environment and input.

Use ./configure -c <conf> to specify a configuration template, where <conf> is the path relative to the conf directory (the .yml suffix can be omitted).

./configure                     # Default to meta.yml configuration template
./configure -c meta             # Explicitly specify meta.yml single-node template
./configure -c rich             # Use feature-rich template with all extensions and MinIO
./configure -c slim             # Use minimal single-node template

# Use different database kernels
./configure -c pgsql            # Native PostgreSQL kernel, basic features (13~18)
./configure -c citus            # Citus distributed HA PostgreSQL (14~17)
./configure -c mssql            # Babelfish kernel, SQL Server protocol compatible (15)
./configure -c polar            # PolarDB PG kernel, Aurora/RAC style (15)
./configure -c ivory            # IvorySQL kernel, Oracle syntax compatible (18)
./configure -c mysql            # OpenHalo kernel, MySQL compatible (14)
./configure -c pgtde            # Percona PostgreSQL Server transparent encryption (18)
./configure -c oriole           # OrioleDB kernel, OLTP enhanced (17)
./configure -c supabase         # Supabase self-hosted configuration (15~18)

# Use multi-node HA templates
./configure -c ha/dual          # Use 2-node HA template
./configure -c ha/trio          # Use 3-node HA template
./configure -c ha/full          # Use 4-node HA template

If no template is specified, Pigsty defaults to the meta.yml single-node configuration template.


Template List

Main Templates

The following are single-node configuration templates for installing Pigsty on a single server:

| Template | Description |
|----------|-------------|
| meta.yml | Default template, single-node PostgreSQL online installation |
| rich.yml | Feature-rich template with local repo, MinIO, and more examples |
| slim.yml | Minimal template, PostgreSQL only without monitoring and infrastructure |

Database Kernel Templates

Templates for various database management systems and kernels:

| Template | Description |
|----------|-------------|
| pgsql.yml | Native PostgreSQL kernel, basic features (13~18) |
| citus.yml | Citus distributed HA PostgreSQL (14~17) |
| mssql.yml | Babelfish kernel, SQL Server protocol compatible (15) |
| polar.yml | PolarDB PG kernel, Aurora/RAC style (15) |
| ivory.yml | IvorySQL kernel, Oracle syntax compatible (17) |
| mysql.yml | OpenHalo kernel, MySQL compatible (14) |
| pgtde.yml | Percona PostgreSQL Server transparent encryption (17) |
| oriole.yml | OrioleDB kernel, OLTP enhanced (17, Debian pkg pending) |
| supabase.yml | Supabase self-hosted configuration (15~17) |

You can add more nodes later or use HA templates to plan your cluster from the start.


HA Templates

You can configure Pigsty to run on multiple nodes, forming a high-availability (HA) cluster:

| Template | Description |
|----------|-------------|
| dual.yml | 2-node semi-HA deployment |
| trio.yml | 3-node standard HA deployment |
| full.yml | 4-node standard deployment |
| safe.yml | 4-node security-enhanced deployment with delayed replica |
| simu.yml | 20-node production environment simulation |

Application Templates

You can use the following templates to run Docker applications/software:

| Template | Description |
|----------|-------------|
| supa.yml | Start single-node Supabase |
| odoo.yml | Start Odoo ERP system |
| dify.yml | Start Dify AI workflow system |
| electric.yml | Start Electric sync engine |

Demo Templates

Besides main templates, Pigsty provides a set of demo templates for different scenarios:

| Template | Description |
|----------|-------------|
| el.yml | Full-parameter config file for EL 8/9 systems |
| debian.yml | Full-parameter config file for Debian/Ubuntu systems |
| remote.yml | Example config for monitoring remote PostgreSQL clusters or RDS |
| redis.yml | Redis cluster example configuration |
| minio.yml | 3-node MinIO cluster example configuration |
| demo.yml | Configuration file for Pigsty public demo site |

Build Templates

The following configuration templates are for development and testing purposes:

| Template | Description |
|----------|-------------|
| build.yml | Open source build config for EL 9/10, Debian 12/13, Ubuntu 22.04/24.04 |

3.3.5 - Use CMDB as Config Inventory

Use PostgreSQL as a CMDB metabase to store Ansible inventory.

Pigsty allows you to use a PostgreSQL metabase as a dynamic configuration source, replacing static YAML configuration files for more powerful configuration management capabilities.


Overview

CMDB (Configuration Management Database) is a method of storing configuration information in a database for management.

In Pigsty, the default configuration source is a static YAML file pigsty.yml, which serves as Ansible’s inventory.

This approach is simple and direct, but when infrastructure scales and requires complex, fine-grained management and external integration, a single static file becomes insufficient.

| Feature | Static YAML File | CMDB Metabase |
|---------|------------------|---------------|
| Querying | Manual search / grep | SQL queries with arbitrary conditions and aggregation |
| Versioning | Depends on Git or manual backups | Database transactions, audit logs, point-in-time snapshots |
| Access Control | Coarse-grained file system permissions | Fine-grained PostgreSQL access control |
| Concurrent Editing | Needs file locking; prone to merge conflicts | Transactions handle concurrency naturally |
| External Integration | Requires YAML parsing | Standard SQL interface, easy to integrate from any language |
| Scalability | Hard to maintain once the file grows large | Scales to physical limits |
| Dynamic Generation | Static; changes must be applied manually | Takes effect immediately, real-time configuration changes |

Pigsty ships the CMDB schema as part of the baseline definition of the meta database in the sample pg-meta cluster.


How It Works

The core idea of CMDB is to replace the static configuration file with a dynamic script. Ansible supports using executable scripts as inventory, as long as the script outputs inventory data in JSON format. When you enable CMDB, Pigsty creates a dynamic inventory script named inventory.sh:

#!/bin/bash
psql ${METADB_URL} -AXtwc 'SELECT text FROM pigsty.inventory;'

This script’s function is simple: every time Ansible needs to read the inventory, it queries configuration data from the PostgreSQL database’s pigsty.inventory view and returns it in JSON format.
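Ansible expects a dynamic inventory script to print JSON in a well-known shape: one entry per group with its hosts/vars/children, plus a _meta.hostvars section carrying per-host variables. A minimal Python sketch of that structure (illustrative values, not the actual view output):

```python
import json

# The JSON shape a dynamic inventory script must emit to Ansible
inventory = {
    "all":     {"children": ["infra", "pg-meta"], "vars": {"admin_ip": "10.10.10.10"}},
    "infra":   {"hosts": ["10.10.10.10"]},
    "pg-meta": {"hosts": ["10.10.10.10"], "vars": {"pg_cluster": "pg-meta"}},
    "_meta":   {"hostvars": {"10.10.10.10": {"pg_seq": 1, "pg_role": "primary"}}},
}
print(json.dumps(inventory))  # roughly what inventory.sh prints on stdout
```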

The overall architecture is as follows:

flowchart LR
    conf["bin/inventory_conf"]
    tocmdb["bin/inventory_cmdb"]
    load["bin/inventory_load"]
    ansible["🚀 Ansible"]

    subgraph static["📄 Static Config Mode"]
        yml[("pigsty.yml")]
    end

    subgraph dynamic["🗄️ CMDB Dynamic Mode"]
        sh["inventory.sh"]
        cmdb[("PostgreSQL CMDB")]
    end

    conf -->|"switch"| yml
    yml -->|"load config"| load
    load -->|"write"| cmdb
    tocmdb -->|"switch"| sh
    sh --> cmdb

    yml --> ansible
    cmdb --> ansible

Data Model

The CMDB database schema is defined in files/cmdb.sql, with all objects in the pigsty schema.

Core Tables

| Table | Description | Primary Key |
|-------|-------------|-------------|
| pigsty.group | Cluster/group definitions, corresponds to Ansible groups | cls |
| pigsty.host | Host definitions, belongs to a group | (cls, ip) |
| pigsty.global_var | Global variables, corresponds to all.vars | key |
| pigsty.group_var | Group variables, corresponds to all.children.<cls>.vars | (cls, key) |
| pigsty.host_var | Host variables, host-level variables | (cls, ip, key) |
| pigsty.default_var | Default variable definitions, stores parameter metadata | key |
| pigsty.job | Job records table, records executed tasks | id |

Table Structure Details

Cluster Table pigsty.group

CREATE TABLE pigsty.group (
    cls     TEXT PRIMARY KEY,        -- Cluster name, primary key
    ctime   TIMESTAMPTZ DEFAULT now(), -- Creation time
    mtime   TIMESTAMPTZ DEFAULT now()  -- Modification time
);

Host Table pigsty.host

CREATE TABLE pigsty.host (
    cls    TEXT NOT NULL REFERENCES pigsty.group(cls),  -- Parent cluster
    ip     INET NOT NULL,                               -- Host IP address
    ctime  TIMESTAMPTZ DEFAULT now(),
    mtime  TIMESTAMPTZ DEFAULT now(),
    PRIMARY KEY (cls, ip)
);

Global Variables Table pigsty.global_var

CREATE TABLE pigsty.global_var (
    key   TEXT PRIMARY KEY,           -- Variable name
    value JSONB NULL,                 -- Variable value (JSON format)
    mtime TIMESTAMPTZ DEFAULT now()   -- Modification time
);

Group Variables Table pigsty.group_var

CREATE TABLE pigsty.group_var (
    cls   TEXT NOT NULL REFERENCES pigsty.group(cls),
    key   TEXT NOT NULL,
    value JSONB NULL,
    mtime TIMESTAMPTZ DEFAULT now(),
    PRIMARY KEY (cls, key)
);

Host Variables Table pigsty.host_var

CREATE TABLE pigsty.host_var (
    cls   TEXT NOT NULL,
    ip    INET NOT NULL,
    key   TEXT NOT NULL,
    value JSONB NULL,
    mtime TIMESTAMPTZ DEFAULT now(),
    PRIMARY KEY (cls, ip, key),
    FOREIGN KEY (cls, ip) REFERENCES pigsty.host(cls, ip)
);

Core Views

CMDB provides a series of views for querying and displaying configuration data:

| View | Description |
|------|-------------|
| pigsty.inventory | Core view: Generates Ansible dynamic inventory JSON |
| pigsty.raw_config | Raw configuration in JSON format |
| pigsty.global_config | Global config view, merges defaults and global vars |
| pigsty.group_config | Group config view, includes host list and group vars |
| pigsty.host_config | Host config view, merges group and host-level vars |
| pigsty.pg_cluster | PostgreSQL cluster view |
| pigsty.pg_instance | PostgreSQL instance view |
| pigsty.pg_database | PostgreSQL database definition view |
| pigsty.pg_users | PostgreSQL user definition view |
| pigsty.pg_service | PostgreSQL service definition view |
| pigsty.pg_hba | PostgreSQL HBA rules view |
| pigsty.pg_remote | Remote PostgreSQL instance view |

pigsty.inventory is the core view that converts database configuration data to the JSON format required by Ansible:

SELECT text FROM pigsty.inventory;

Utility Scripts

Pigsty provides three convenience scripts for managing CMDB:

| Script | Function |
|--------|----------|
| bin/inventory_load | Load YAML configuration file into PostgreSQL database |
| bin/inventory_cmdb | Switch configuration source to CMDB (dynamic inventory script) |
| bin/inventory_conf | Switch configuration source to static config file pigsty.yml |

inventory_load

Parse and import YAML configuration file into CMDB:

bin/inventory_load                     # Load default pigsty.yml to default CMDB
bin/inventory_load -p /path/to/conf.yml  # Specify configuration file path
bin/inventory_load -d "postgres://..."   # Specify database connection URL
bin/inventory_load -n myconfig           # Specify configuration name

The script performs the following operations:

  1. Clears existing data in the pigsty schema
  2. Parses the YAML configuration file
  3. Writes global variables to the global_var table
  4. Writes cluster definitions to the group table
  5. Writes cluster variables to the group_var table
  6. Writes host definitions to the host table
  7. Writes host variables to the host_var table

Environment Variables

  • PIGSTY_HOME: Pigsty installation directory, defaults to ~/pigsty
  • METADB_URL: Database connection URL, defaults to service=meta

inventory_cmdb

Switch Ansible to use CMDB as the configuration source:

bin/inventory_cmdb

The script performs the following operations:

  1. Creates dynamic inventory script ${PIGSTY_HOME}/inventory.sh
  2. Modifies ansible.cfg to set inventory to inventory.sh

The generated inventory.sh contents:

#!/bin/bash
psql ${METADB_URL} -AXtwc 'SELECT text FROM pigsty.inventory;'

inventory_conf

Switch back to using static YAML configuration file:

bin/inventory_conf

The script modifies ansible.cfg to set inventory back to pigsty.yml.


Usage Workflow

First-time CMDB Setup

  1. Initialize the CMDB schema (usually done automatically during Pigsty installation):
     psql -f ~/pigsty/files/cmdb.sql
  2. Load the configuration into the database:
     bin/inventory_load
  3. Switch to CMDB mode:
     bin/inventory_cmdb
  4. Verify the configuration:
     ansible all --list-hosts          # List all hosts
     ansible-inventory --list          # View complete inventory

Query Configuration

After enabling CMDB, you can flexibly query configuration using SQL:

-- View all clusters
SELECT cls FROM pigsty.group;

-- View all hosts in a cluster
SELECT ip FROM pigsty.host WHERE cls = 'pg-meta';

-- View global variables
SELECT key, value FROM pigsty.global_var;

-- View cluster variables
SELECT key, value FROM pigsty.group_var WHERE cls = 'pg-meta';

-- View all PostgreSQL clusters
SELECT cls, name, pg_databases, pg_users FROM pigsty.pg_cluster;

-- View all PostgreSQL instances
SELECT cls, ins, ip, seq, role FROM pigsty.pg_instance;

-- View all database definitions
SELECT cls, datname, owner, encoding FROM pigsty.pg_database;

-- View all user definitions
SELECT cls, name, login, superuser FROM pigsty.pg_users;

Modify Configuration

You can modify configuration directly via SQL:

-- Add new cluster
INSERT INTO pigsty.group (cls) VALUES ('pg-new');

-- Add cluster variable
INSERT INTO pigsty.group_var (cls, key, value)
VALUES ('pg-new', 'pg_cluster', '"pg-new"');

-- Add host
INSERT INTO pigsty.host (cls, ip) VALUES ('pg-new', '10.10.10.20');

-- Add host variables
INSERT INTO pigsty.host_var (cls, ip, key, value)
VALUES ('pg-new', '10.10.10.20', 'pg_seq', '1'),
       ('pg-new', '10.10.10.20', 'pg_role', '"primary"');

-- Modify global variable
UPDATE pigsty.global_var SET value = '"new-value"' WHERE key = 'some_param';

-- Delete cluster (cascades to hosts and variables)
DELETE FROM pigsty.group WHERE cls = 'pg-old';

Changes take effect immediately without reloading or restarting any service.
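Batch changes are best made inside a single transaction, so that a failed statement rolls everything back and the inventory never ends up half-modified. A minimal sketch against the schema above, using a hypothetical pg-batch cluster:

```sql
-- Hypothetical example: create a cluster and its host atomically
BEGIN;
INSERT INTO pigsty.group (cls) VALUES ('pg-batch');
INSERT INTO pigsty.group_var (cls, key, value)
VALUES ('pg-batch', 'pg_cluster', '"pg-batch"');
INSERT INTO pigsty.host (cls, ip) VALUES ('pg-batch', '10.10.10.30');
-- Optionally inspect the generated inventory before committing:
-- SELECT text FROM pigsty.inventory;
COMMIT;  -- or ROLLBACK; to discard all of the above
```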

Switch Back to Static Configuration

To switch back to static configuration file mode:

bin/inventory_conf

Advanced Usage

Export Configuration

Export CMDB configuration to YAML format:

psql service=meta -AXtwc "SELECT jsonb_pretty(jsonb_build_object('all', jsonb_build_object('children', children, 'vars', vars))) FROM pigsty.raw_config;"

Or use the ansible-inventory command:

ansible-inventory --list --yaml > exported_config.yml

Configuration Auditing

Track configuration changes using the mtime field:

-- View recently modified global variables
SELECT key, value, mtime FROM pigsty.global_var
ORDER BY mtime DESC LIMIT 10;

-- View changes after a specific time
SELECT * FROM pigsty.group_var
WHERE mtime > '2024-01-01'::timestamptz;

Integration with External Systems

CMDB uses standard PostgreSQL, making it easy to integrate with other systems:

  • Web Management Interface: Expose configuration data through REST API (e.g., PostgREST)
  • CI/CD Pipelines: Read/write database directly in deployment scripts
  • Monitoring & Alerting: Generate monitoring rules based on configuration data
  • ITSM Systems: Sync with enterprise CMDB systems

Considerations

  1. Data Consistency: After modifying configuration, you need to re-run the corresponding Ansible playbooks to apply changes to the actual environment

  2. Backup: Configuration data in CMDB is critical, ensure regular backups

  3. Permissions: Configure appropriate database access permissions for CMDB to avoid accidental modifications

  4. Transactions: When making batch configuration changes, perform them within a transaction for rollback on errors

  5. Connection Pooling: The inventory.sh script creates a new connection on each execution; if Ansible runs frequently, consider using connection pooling


Summary

CMDB is Pigsty’s advanced configuration management solution, suitable for scenarios requiring large-scale cluster management, complex queries, external integration, or fine-grained access control. By storing configuration data in PostgreSQL, you can fully leverage the database’s powerful capabilities to manage infrastructure configuration.

| Feature | Description |
|---------|-------------|
| Storage | PostgreSQL pigsty schema |
| Dynamic Inventory | inventory.sh script |
| Config Load | bin/inventory_load |
| Switch to CMDB | bin/inventory_cmdb |
| Switch to YAML | bin/inventory_conf |
| Core View | pigsty.inventory |

3.4 - High Availability

Pigsty uses Patroni to implement PostgreSQL high availability, ensuring automatic failover when the primary becomes unavailable.

Overview

Pigsty’s PostgreSQL clusters come with out-of-the-box high availability, powered by Patroni, Etcd, and HAProxy.

When your PostgreSQL cluster has two or more instances, you automatically have self-healing database high availability without any additional configuration — as long as any instance in the cluster survives, the cluster can provide complete service. Clients only need to connect to any node in the cluster to get full service without worrying about primary-replica topology changes.

With default configuration, the primary failure Recovery Time Objective (RTO) ≈ 45s, and Recovery Point Objective (RPO) < 1MB; for replica failures, RPO = 0 and RTO ≈ 0 (brief interruption). In consistency-first mode, failover can guarantee zero data loss: RPO = 0. All these metrics can be configured as needed based on your actual hardware conditions and reliability requirements.

Pigsty includes built-in HAProxy load balancers for automatic traffic switching, and provides DNS/VIP/LVS and other access methods for clients. Failover and switchover are almost transparent to applications apart from a brief interruption: there is no need to modify connection strings or restart anything. This minimal maintenance-window requirement brings great flexibility and convenience: you can perform rolling maintenance and upgrades on an entire cluster without coordinating with application teams. And because a hardware failure can safely wait until the next day to be handled, developers, operations engineers, and DBAs can sleep soundly during incidents.

pigsty-ha

Many large organizations and core institutions have been using Pigsty in production for extended periods. The largest deployment has 25K CPU cores and 220+ PostgreSQL ultra-large instances (64c / 512g / 3TB NVMe SSD). In this deployment case, dozens of hardware failures and various incidents occurred over five years, yet overall availability of over 99.999% was maintained.


What problems does High Availability solve?

  • Elevates the availability (A) dimension of data security (C/I/A) to a new level: RPO ≈ 0, RTO < 45s.
  • Gains seamless rolling maintenance capability, minimizing maintenance window requirements and bringing great convenience.
  • Hardware failures can self-heal immediately without human intervention, allowing operations and DBAs to sleep well.
  • Replicas can handle read-only requests, offloading primary load and fully utilizing resources.

What are the costs of High Availability?

  • Infrastructure dependency: HA requires DCS (etcd/zk/consul) for consensus.
  • Higher starting threshold: A meaningful HA deployment requires at least three nodes.
  • Extra resource consumption: Each new replica consumes additional resources, though this is usually not a major concern.
  • Significantly increased complexity: management overhead grows considerably, requiring tooling to keep it under control.

Limitations of High Availability

Since replication happens in real time, all changes are immediately applied to replicas. Therefore, streaming-replication-based HA solutions cannot protect against data deletion or corruption caused by human error or software defects (e.g., DROP TABLE or a stray DELETE). Such failures require a delayed cluster, or point-in-time recovery from a previous base backup and WAL archive.

| Configuration Strategy | RTO | RPO |
|------------------------|-----|-----|
| Standalone + Nothing | Data permanently lost, unrecoverable | All data lost |
| Standalone + Base Backup | Depends on backup size and bandwidth (hours) | Lose data since last backup (hours to days) |
| Standalone + Base Backup + WAL Archive | Depends on backup size and bandwidth (hours) | Lose unarchived data (tens of MB) |
| Primary-Replica + Manual Failover | ~10 minutes | Lose data in replication lag (~100KB) |
| Primary-Replica + Auto Failover | Within 1 minute | Lose data in replication lag (~100KB) |
| Primary-Replica + Auto Failover + Sync Commit | Within 1 minute | No data loss |

How It Works

In Pigsty, the high availability architecture works as follows:

  • PostgreSQL uses standard streaming replication to build physical replicas; replicas take over when the primary fails.
  • Patroni manages PostgreSQL server processes and handles high availability matters.
  • Etcd provides distributed configuration storage (DCS) capability and is used for leader election after failures.
  • Patroni relies on Etcd to reach cluster leader consensus and provides health check interfaces externally.
  • HAProxy exposes cluster services externally and uses Patroni health check interfaces to automatically distribute traffic to healthy nodes.
  • vip-manager provides an optional Layer 2 VIP, retrieves leader information from Etcd, and binds the VIP to the node where the cluster primary resides.

When the primary fails, a new round of leader election is triggered. The healthiest replica in the cluster (highest LSN position, minimum data loss) wins and is promoted to the new primary. After the winning replica is promoted, read-write traffic is immediately routed to the new primary. The impact of primary failure is brief write service unavailability: write requests will be blocked or fail directly from primary failure until new primary promotion, with unavailability typically lasting 15 to 30 seconds, usually not exceeding 1 minute.

When a replica fails, read-only traffic is routed to other replicas. Only when all replicas fail will read-only traffic ultimately be handled by the primary. The impact of replica failure is partial read-only query interruption: queries currently running on that replica will abort due to connection reset and be immediately taken over by other available replicas.

Failure detection is performed jointly by Patroni and Etcd. The cluster leader holds a lease; if the cluster leader fails to renew the lease in time (10s) due to failure, the lease is released, triggering a Failover and new cluster election.

Even without any failures, you can proactively change the cluster primary through Switchover. In this case, write queries on the primary will experience a brief interruption and be immediately routed to the new primary. This operation is typically used for rolling maintenance/upgrades of database servers.

3.4.1 - RPO Trade-offs

Trade-off analysis for RPO (Recovery Point Objective), finding the optimal balance between availability and data loss.

RPO (Recovery Point Objective) defines the maximum amount of data loss allowed when the primary fails.

For scenarios where data integrity is critical, such as financial transactions, RPO = 0 is typically required, meaning no data loss is allowed.

However, stricter RPO targets come at a cost: higher write latency, reduced system throughput, and the risk that replica failures may cause primary unavailability. For typical scenarios, some data loss is acceptable (e.g., up to 1MB) in exchange for higher availability and performance.


Trade-offs

In asynchronous replication scenarios, there is typically some replication lag between replicas and the primary (depending on network and throughput, normally in the range of 10KB-100KB / 100µs-10ms). This means when the primary fails, replicas may not have fully synchronized with the latest data. If a failover occurs, the new primary may lose some unreplicated data.

The upper limit of potential data loss is controlled by the pg_rpo parameter, which defaults to 1048576 bytes (1MiB), meaning up to 1MiB of data loss can be tolerated during failover.

When the cluster primary fails, if any replica has replication lag within this threshold, Pigsty will automatically promote that replica to be the new primary. However, when all replicas exceed this threshold, Pigsty will refuse automatic failover to prevent data loss. Manual intervention is then required to decide whether to wait for the primary to recover (which may never happen) or accept the data loss and force-promote a replica.

You need to configure this value based on your business requirements, making a trade-off between availability and consistency. Increasing this value improves the success rate of automatic failover but also increases the upper limit of potential data loss.

When you set pg_rpo = 0, Pigsty enables synchronous replication, ensuring the primary only returns write success after at least one replica has persisted the data. This configuration ensures zero replication lag but introduces significant write latency and reduces overall throughput.

flowchart LR
    A([Primary Failure]) --> B{Synchronous<br/>Replication?}

    B -->|No| C{Lag < RPO?}
    B -->|Yes| D{Sync Replica<br/>Available?}

    C -->|Yes| E[Lossy Auto Failover<br/>RPO < 1MB]
    C -->|No| F[Refuse Auto Failover<br/>Wait for Primary Recovery<br/>or Manual Intervention]

    D -->|Yes| G[Lossless Auto Failover<br/>RPO = 0]
    D -->|No| H{Strict Mode?}

    H -->|No| C
    H -->|Yes| F

    style A fill:#dc3545,stroke:#b02a37,color:#fff
    style E fill:#F0AD4E,stroke:#146c43,color:#fff
    style G fill:#198754,stroke:#146c43,color:#fff
    style F fill:#BE002F,stroke:#565e64,color:#fff

Protection Modes

Pigsty provides three protection modes to help users make trade-offs under different RPO requirements, similar to Oracle Data Guard protection modes.

| Name | Maximum Performance | Maximum Availability | Maximum Protection |
|------|---------------------|----------------------|--------------------|
| Replication | Asynchronous | Synchronous | Strict Synchronous |
| Data Loss | Possible (replication lag) | Zero normally, minor when degraded | Zero |
| Write Latency | Lowest | Medium (+1 network RTT) | Medium (+1 network RTT) |
| Throughput | Highest | Reduced | Reduced |
| Replica Failure Impact | None | Auto degrade, service continues | Primary stops writes |
| RPO | < 1MB | = 0 (normal) / < 1MB (degraded) | = 0 |
| Use Case | Typical business, performance first | Critical business, safety first | Financial core, compliance first |
| Configuration | Default config | pg_rpo = 0 | pg_conf: crit.yml |

Implementation

The three protection modes differ in how two core Patroni parameters are configured: synchronous_mode and synchronous_mode_strict:

  • synchronous_mode: Whether Patroni enables synchronous replication. If enabled, check if synchronous_mode_strict enables strict synchronous mode.
  • synchronous_mode_strict = false: Default configuration, allows degradation to async mode when replicas fail, primary continues service (Maximum Availability)
  • synchronous_mode_strict = true: Degradation forbidden, primary stops writes until sync replica recovers (Maximum Protection)
| Mode | synchronous_mode | synchronous_mode_strict | Replication Mode | Replica Failure Behavior |
|------|------------------|-------------------------|------------------|--------------------------|
| Max Performance | false | - | Async | No impact |
| Max Availability | true | false | Synchronous | Auto degrade to async |
| Max Protection | true | true | Strict Synchronous | Primary refuses writes |

Typically, you only need to set the pg_rpo parameter to 0 to turn on the synchronous_mode switch, activating Maximum Availability mode. If you use the pg_conf = crit.yml template, it additionally turns on synchronous_mode_strict, activating Maximum Protection mode. You can also enable the watchdog to fence the primary outright during node/Patroni freeze scenarios instead of degrading, achieving behavior equivalent to Oracle's Maximum Protection mode.

You can also directly configure these Patroni parameters as needed. Refer to Patroni and PostgreSQL documentation to achieve stronger data protection, such as:

  • Specify the synchronous replica list, configure more sync replicas to improve disaster tolerance, use quorum synchronous commit, or even require all replicas to perform synchronous commit.
  • Configure synchronous_commit: 'remote_apply' to strictly ensure primary-replica read-write consistency. (Oracle Maximum Protection mode is equivalent to remote_write)
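The switches above map onto plain Patroni DCS configuration. A hedged sketch of what a strict-synchronous setup might look like at the Patroni level (key names are standard Patroni options; the values here are illustrative choices, not Pigsty defaults):

```yaml
# Illustrative Patroni DCS settings for strict synchronous replication
synchronous_mode: true            # enable synchronous replication
synchronous_mode_strict: true     # forbid degradation to async on replica failure
synchronous_node_count: 2         # example: require two synchronous standbys
postgresql:
  parameters:
    synchronous_commit: remote_apply  # primary waits until standbys apply WAL
```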

Recommendations

Maximum Performance mode (asynchronous replication) is the default mode used by Pigsty and is sufficient for the vast majority of workloads. Tolerating minor data loss during failures (typically in the range of a few KB to hundreds of KB) in exchange for higher throughput and availability is the recommended configuration for typical business scenarios. In this case, you can adjust the maximum allowed data loss through the pg_rpo parameter to suit different business needs.

Maximum Availability mode (synchronous replication) is suitable for scenarios with high data integrity requirements that cannot tolerate data loss. This mode requires at least a two-node PostgreSQL cluster (one primary, one replica). Set pg_rpo to 0 to enable this mode.

Maximum Protection mode (strict synchronous replication) is suitable for financial transactions, medical records, and other scenarios with extremely high data integrity requirements. We recommend using at least a three-node cluster (one primary, two replicas), because with only two nodes, if the replica fails, the primary will stop writes, causing service unavailability, which reduces overall system reliability. With three nodes, if only one replica fails, the primary can continue to serve.
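Put together, a three-node Maximum Protection cluster definition might look like the following sketch (cluster name and IP addresses are placeholders, following the standard pigsty.yml inventory format):

```yaml
pg-crit:                     # hypothetical cluster name
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-crit
    pg_rpo: 0                # enable synchronous replication (Maximum Availability)
    pg_conf: crit.yml        # strict synchronous template (Maximum Protection)
```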

3.4.2 - Failure Model

Detailed analysis of worst-case, best-case, and average RTO calculation logic and results across three classic failure detection/recovery paths

Patroni failures can be classified into 10 categories by failure target, and further consolidated into five categories based on detection path, which are detailed in this section.

| # | Failure Scenario | Description | Final Path |
|---|------------------|-------------|------------|
| 1 | PG process crash | Crash, OOM killed | Active Detection |
| 2 | PG connection refused | max_connections | Active Detection |
| 3 | PG zombie | Process alive but unresponsive | Active Detection (timeout) |
| 4 | Patroni process crash | kill -9, OOM | Passive Detection |
| 5 | Patroni zombie | Process alive but stuck | Watchdog |
| 6 | Node down | Power outage, hardware failure | Passive Detection |
| 7 | Node zombie | IO hang, CPU starvation | Watchdog |
| 8 | Primary ↔ DCS network failure | Firewall, switch failure | Network Partition |
| 9 | Storage failure | Disk failure, disk full, mount failure | Active Detection or Watchdog |
| 10 | Manual switchover | Switchover/Failover | Manual Trigger |

However, for RTO calculation purposes, all failures ultimately converge to two paths. This section explores the upper bound, lower bound, and average RTO for these two scenarios.

flowchart LR
    A([Primary Failure]) --> B{Patroni<br/>Detected?}

    B -->|PG Crash| C[Attempt Local Restart]
    B -->|Node Down| D[Wait TTL Expiration]

    C -->|Success| E([Local Recovery])
    C -->|Fail/Timeout| F[Release Leader Lock]

    D --> F
    F --> G[Replica Election]
    G --> H[Execute Promote]
    H --> I[HAProxy Detects]
    I --> J([Service Restored])

    style A fill:#dc3545,stroke:#b02a37,color:#fff
    style E fill:#198754,stroke:#146c43,color:#fff
    style J fill:#198754,stroke:#146c43,color:#fff

3.4.2.1 - Model of Patroni Passive Failure

Failover path triggered by node crash causing leader lease expiration and cluster election

RTO Timeline


Failure Model

| Phase | Best | Worst | Average | Description |
|-------|------|-------|---------|-------------|
| Lease Expiration | ttl - loop | ttl | ttl - loop/2 | Best: crash just before refresh; Worst: crash right after refresh |
| Replica Detect | 0 | loop | loop / 2 | Best: exactly at check point; Worst: just missed check point |
| Election Promote | 0 | 2 | 1 | Best: direct lock and promote; Worst: API timeout + Promote |
| HAProxy Check | (rise-1) × fastinter | (rise-1) × fastinter + inter | (rise-1) × fastinter + inter/2 | Best: state change before check; Worst: state change right after check |

Key Difference Between Passive and Active Failover:

| Scenario | Patroni Status | Lease Handling | Primary Wait Time |
|----------|----------------|----------------|-------------------|
| Active Failover (PG crash) | Alive, healthy | Actively tries to restart PG, releases lease on timeout | primary_start_timeout |
| Passive Failover (Node crash) | Dies with node | Cannot actively release, must wait for TTL expiration | ttl |

In passive failover scenarios, Patroni dies along with the node and cannot actively release the Leader Key. The lease in DCS can only trigger cluster election after TTL naturally expires.


Timeline Analysis

Phase 1: Lease Expiration

The Patroni primary refreshes the Leader Key every loop_wait cycle, resetting TTL to the configured value.

Timeline:
     t-loop        t          t+ttl-loop    t+ttl
       |           |              |           |
    Last Refresh  Failure      Best Case   Worst Case
       |←── loop ──→|              |           |
       |←──────────── ttl ─────────────────────→|
  • Best case: Failure occurs just before lease refresh (elapsed loop since last refresh), remaining TTL = ttl - loop
  • Worst case: Failure occurs right after lease refresh, must wait full ttl
  • Average case: ttl - loop/2
$$
T_{expire} = \begin{cases} ttl - loop & \text{Best} \\ ttl - loop/2 & \text{Average} \\ ttl & \text{Worst} \end{cases}
$$

Phase 2: Replica Detection

Replicas wake up on loop_wait cycles and check the Leader Key status in DCS.

Timeline:
    Lease Expired   Replica Wakes
       |            |
       |←── 0~loop ─→|
  • Best case: Replica happens to wake when lease expires, wait 0
  • Worst case: Replica just entered sleep when lease expires, wait loop
  • Average case: loop/2
$$
T_{detect} = \begin{cases} 0 & \text{Best} \\ loop/2 & \text{Average} \\ loop & \text{Worst} \end{cases}
$$

Phase 3: Lock Contest & Promote

When replicas detect Leader Key expiration, they start the election process. The replica that acquires the Leader Key executes pg_ctl promote to become the new primary.

  1. Query each replica’s replication position in parallel via the REST API (typically ~10ms, with a hardcoded 2s timeout).
  2. Compare WAL positions to determine the best candidate; replicas attempt to create the Leader Key (an atomic CAS operation)
  3. Execute pg_ctl promote to become primary (very fast, typically negligible)
Election Flow:
  ReplicaA ──→ Query replication position ──→ Compare ──→ Contest lock ──→ Success
  ReplicaB ──→ Query replication position ──→ Compare ──→ Contest lock ──→ Fail
  • Best case: Single replica or immediate lock acquisition and promotion, constant overhead 0.1s
  • Worst case: DCS API call timeout: 2s
  • Average case: 1s constant overhead
$$
T_{elect} = \begin{cases} 0.1 & \text{Best} \\ 1 & \text{Average} \\ 2 & \text{Worst} \end{cases}
$$

Phase 4: Health Check

HAProxy detects the new primary online, requiring rise consecutive successful health checks.

Detection Timeline:
  New Primary    First Check   Second Check  Third Check (UP)
     |          |           |           |
     |←─ 0~inter ─→|←─ fast ─→|←─ fast ─→|
  • Best case: New primary promoted just before check, (rise-1) × fastinter
  • Worst case: New primary promoted right after check, (rise-1) × fastinter + inter
  • Average case: (rise-1) × fastinter + inter/2
$$
T_{haproxy} = \begin{cases} (rise-1) \times fastinter & \text{Best} \\ (rise-1) \times fastinter + inter/2 & \text{Average} \\ (rise-1) \times fastinter + inter & \text{Worst} \end{cases}
$$

RTO Formula

Sum all phase times to get total RTO:

Best Case

$$
RTO_{min} = ttl - loop + 0.1 + (rise-1) \times fastinter
$$

Average Case

$$
RTO_{avg} = ttl + 1 + inter/2 + (rise-1) \times fastinter
$$

Worst Case

$$
RTO_{max} = ttl + loop + 2 + inter + (rise-1) \times fastinter
$$

Model Calculation

Substitute the four RTO model parameters into the formulas above:

pg_rto_plan:  # [ttl, loop, retry, start, margin, inter, fastinter, downinter, rise, fall]
  fast: [ 20  ,5  ,5  ,15 ,5  ,'1s' ,'0.5s' ,'1s' ,3 ,3 ]  # rto < 30s
  norm: [ 30  ,5  ,10 ,25 ,5  ,'2s' ,'1s'   ,'2s' ,3 ,3 ]  # rto < 45s
  safe: [ 60  ,10 ,20 ,45 ,10 ,'3s' ,'1.5s' ,'3s' ,3 ,3 ]  # rto < 90s
  wide: [ 120 ,20 ,30 ,95 ,15 ,'4s' ,'2s'   ,'4s' ,3 ,3 ]  # rto < 150s

Four Mode Calculation Results (unit: seconds, format: min / avg / max)

| Phase | fast | norm | safe | wide |
|-------|------|------|------|------|
| Lease Expiration | 15 / 17 / 20 | 25 / 27 / 30 | 50 / 55 / 60 | 100 / 110 / 120 |
| Replica Detection | 0 / 3 / 5 | 0 / 3 / 5 | 0 / 5 / 10 | 0 / 10 / 20 |
| Lock Contest & Promote | 0 / 1 / 2 | 0 / 1 / 2 | 0 / 1 / 2 | 0 / 1 / 2 |
| Health Check | 1 / 2 / 2 | 2 / 3 / 4 | 3 / 5 / 6 | 4 / 6 / 8 |
| Total | 16 / 23 / 29 | 27 / 34 / 41 | 53 / 66 / 78 | 104 / 127 / 150 |
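As a sanity check, the RTO formulas above can be recomputed mechanically. The sketch below plugs the pg_rto_plan parameters into the passive-failover formulas (only ttl, loop, inter, fastinter, and rise appear in them); the results agree with the Total row up to sub-second per-phase rounding:

```python
# Recompute passive-failover RTO bounds (seconds) from the formulas above.
PLANS = {
    "fast": dict(ttl=20,  loop=5,  inter=1.0, fastinter=0.5, rise=3),
    "norm": dict(ttl=30,  loop=5,  inter=2.0, fastinter=1.0, rise=3),
    "safe": dict(ttl=60,  loop=10, inter=3.0, fastinter=1.5, rise=3),
    "wide": dict(ttl=120, loop=20, inter=4.0, fastinter=2.0, rise=3),
}

def passive_rto(ttl, loop, inter, fastinter, rise):
    check = (rise - 1) * fastinter           # HAProxy tail: (rise-1) x fastinter
    best  = ttl - loop + 0.1 + check         # RTO_min
    avg   = ttl + 1 + inter / 2 + check      # RTO_avg
    worst = ttl + loop + 2 + inter + check   # RTO_max
    return best, avg, worst

for name, p in PLANS.items():
    best, avg, worst = passive_rto(**p)
    print(f"{name}: {best:.1f} / {avg:.1f} / {worst:.1f}")
# norm prints 27.1 / 34.0 / 41.0, matching the 27 / 34 / 41 Total above.
```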

3.4.2.2 - Model of Patroni Active Failure

PostgreSQL primary process crashes while Patroni stays alive and attempts restart, triggering failover after timeout

RTO Timeline


Failure Model

| Item | Best | Worst | Average | Description |
|------|------|-------|---------|-------------|
| Crash Found | 0 | loop | loop/2 | Best: PG crashes right before check; Worst: PG crashes right after check |
| Restart Timeout | 0 | start | start | Best: PG recovers instantly; Worst: wait full start timeout before releasing lease |
| Replica Detect | 0 | loop | loop/2 | Best: right at check point; Worst: just missed check point |
| Elect Promote | 0 | 2 | 1 | Best: acquire lock and promote directly; Worst: API timeout + Promote |
| HAProxy Check | (rise-1) × fastinter | (rise-1) × fastinter + inter | (rise-1) × fastinter + inter/2 | Best: state changes before check; Worst: state changes right after check |

Key Difference Between Active and Passive Failure:

| Scenario | Patroni Status | Lease Handling | Main Wait Time |
|----------|----------------|----------------|----------------|
| Active Failure (PG crash) | Alive, healthy | Actively tries to restart PG, releases lease after timeout | primary_start_timeout |
| Passive Failure (node down) | Dies with node | Cannot actively release, must wait for TTL expiry | ttl |

In active failure scenarios, Patroni remains alive and can actively detect PG crash and attempt restart. If restart succeeds, service self-heals; if timeout expires without recovery, Patroni actively releases the Leader Key, triggering cluster election.


Timing Analysis

Phase 1: Failure Detection

Patroni checks PostgreSQL status every loop_wait cycle (via pg_isready or process check).

Timeline:
    Last check      PG crash      Next check
       |              |              |
       |←── 0~loop ──→|              |
  • Best case: PG crashes right before Patroni check, detected immediately, wait 0
  • Worst case: PG crashes right after check, wait for next cycle, wait loop
  • Average case: loop/2
$$
T_{detect} = \begin{cases} 0 & \text{Best} \\ loop/2 & \text{Average} \\ loop & \text{Worst} \end{cases}
$$

Phase 2: Restart Timeout

After Patroni detects PG crash, it attempts to restart PostgreSQL. This phase has two possible outcomes:

Timeline:
  Crash detected     Restart attempt     Success/Timeout
      |                  |                    |
      |←──── 0 ~ start ─────────────────────→|

Path A: Self-healing Success (Best case)

  • PG restarts successfully, service recovers
  • No failover triggered, extremely short RTO
  • Wait time: 0 (relative to Failover path)

Path B: Failover Required (Average/Worst case)

  • PG still not recovered after primary_start_timeout
  • Patroni actively releases Leader Key
  • Wait time: start
$$
T_{restart} = \begin{cases} 0 & \text{Best (self-healing success)} \\ start & \text{Average (failover required)} \\ start & \text{Worst} \end{cases}
$$

Note: Average case assumes failover is required. If PG can quickly self-heal, overall RTO will be significantly lower.

Phase 3: Standby Detection

Standbys wake up on loop_wait cycle and check Leader Key status in DCS. When primary Patroni releases the Leader Key, standbys discover this and begin election.

Timeline:
    Lease released    Standby wakes
       |                  |
       |←── 0~loop ──────→|
  • Best case: Standby wakes right when lease is released, wait 0
  • Worst case: Standby just went to sleep when lease released, wait loop
  • Average case: loop/2
$$
T_{standby} = \begin{cases} 0 & \text{Best} \\ loop/2 & \text{Average} \\ loop & \text{Worst} \end{cases}
$$

Phase 4: Lock & Promote

After standbys discover Leader Key vacancy, election begins. The standby that acquires the Leader Key executes pg_ctl promote to become the new primary.

  1. Query each standby’s replication position in parallel via the REST API (typically ~10ms, with a hardcoded 2s timeout).
  2. Compare WAL positions to determine the best candidate; standbys attempt to create the Leader Key (an atomic CAS operation)
  3. Execute pg_ctl promote to become primary (very fast, typically negligible)
Election process:
  StandbyA ──→ Query replication position ──→ Compare ──→ Try lock ──→ Success
  StandbyB ──→ Query replication position ──→ Compare ──→ Try lock ──→ Fail
  • Best case: Single standby or direct lock acquisition and promote, constant overhead 0.1s
  • Worst case: DCS API call timeout: 2s
  • Average case: 1s constant overhead
$$
T_{elect} = \begin{cases} 0.1 & \text{Best} \\ 1 & \text{Average} \\ 2 & \text{Worst} \end{cases}
$$

Phase 5: Health Check

HAProxy detects new primary online, requires rise consecutive successful health checks.

Check timeline:
  New primary    First check    Second check   Third check (UP)
     |              |               |               |
     |←─ 0~inter ──→|←─── fast ────→|←─── fast ────→|
  • Best case: New primary comes up right at check time, (rise-1) × fastinter
  • Worst case: New primary comes up right after check, (rise-1) × fastinter + inter
  • Average case: (rise-1) × fastinter + inter/2
$$
T_{haproxy} = \begin{cases} (rise-1) \times fastinter & \text{Best} \\ (rise-1) \times fastinter + inter/2 & \text{Average} \\ (rise-1) \times fastinter + inter & \text{Worst} \end{cases}
$$

RTO Formula

Sum all phase times to get total RTO:

Best Case (PG instant self-healing)

$$
RTO_{min} = 0 + 0 + 0 + 0.1 + (rise-1) \times fastinter \approx (rise-1) \times fastinter
$$

Average Case (Failover required)

$$
RTO_{avg} = loop + start + 1 + inter/2 + (rise-1) \times fastinter
$$

Worst Case

$$
RTO_{max} = loop \times 2 + start + 2 + inter + (rise-1) \times fastinter
$$

Model Calculation

Substituting the four RTO model parameters into the formulas above:

pg_rto_plan:  # [ttl, loop, retry, start, margin, inter, fastinter, downinter, rise, fall]
  fast: [ 20  ,5  ,5  ,15 ,5  ,'1s' ,'0.5s' ,'1s' ,3 ,3 ]  # rto < 30s
  norm: [ 30  ,5  ,10 ,25 ,5  ,'2s' ,'1s'   ,'2s' ,3 ,3 ]  # rto < 45s
  safe: [ 60  ,10 ,20 ,45 ,10 ,'3s' ,'1.5s' ,'3s' ,3 ,3 ]  # rto < 90s
  wide: [ 120 ,20 ,30 ,95 ,15 ,'4s' ,'2s'   ,'4s' ,3 ,3 ]  # rto < 150s
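Plugging these parameter sets into the closed-form bounds above can be done with a minimal Python sketch (names illustrative). The min and max values reproduce the per-mode totals exactly; averages land within a second of the tabulated totals, which are summed from rounded per-phase averages:

```python
# Evaluate the closed-form RTO bounds for each pg_rto_plan mode.
# Columns: ttl, loop, retry, start, margin, inter, fastinter, downinter, rise, fall
PLANS = {
    "fast": (20, 5, 5, 15, 5, 1.0, 0.5, 1.0, 3, 3),
    "norm": (30, 5, 10, 25, 5, 2.0, 1.0, 2.0, 3, 3),
    "safe": (60, 10, 20, 45, 10, 3.0, 1.5, 3.0, 3, 3),
    "wide": (120, 20, 30, 95, 15, 4.0, 2.0, 4.0, 3, 3),
}

def rto_bounds(ttl, loop, retry, start, margin, inter, fastinter, downinter, rise, fall):
    """(min, avg, max) RTO per the formulas above."""
    hc = (rise - 1) * fastinter                 # health-check base cost
    return (hc,                                  # best: PG instant self-heal
            loop + start + 1 + inter / 2 + hc,   # average: failover path
            loop * 2 + start + 2 + inter + hc)   # worst

for mode, params in PLANS.items():
    print(mode, rto_bounds(*params))
```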

Calculation Results for Four Modes (unit: seconds, format: min / avg / max)

| Phase | fast | norm | safe | wide |
|-------|------|------|------|------|
| Failure Detection | 0 / 3 / 5 | 0 / 3 / 5 | 0 / 5 / 10 | 0 / 10 / 20 |
| Restart Timeout | 0 / 15 / 15 | 0 / 25 / 25 | 0 / 45 / 45 | 0 / 95 / 95 |
| Standby Detection | 0 / 3 / 5 | 0 / 3 / 5 | 0 / 5 / 10 | 0 / 10 / 20 |
| Lock & Promote | 0 / 1 / 2 | 0 / 1 / 2 | 0 / 1 / 2 | 0 / 1 / 2 |
| Health Check | 1 / 2 / 2 | 2 / 3 / 4 | 3 / 5 / 6 | 4 / 6 / 8 |
| Total | 1 / 24 / 29 | 2 / 35 / 41 | 3 / 61 / 73 | 4 / 122 / 145 |

Comparison with Passive Failure

| Phase | Active Failure (PG crash) | Passive Failure (node down) | Description |
|-------|---------------------------|-----------------------------|-------------|
| Detection Mechanism | Patroni active detection | TTL passive expiry | Active detection discovers failure faster |
| Core Wait | start | ttl | start is usually less than ttl, but requires additional failure detection time |
| Lease Handling | Active release | Passive expiry | Active release is more timely |
| Self-healing Possible | Yes | No | Active detection can attempt local recovery |

RTO Comparison (Average case):

| Mode | Active Failure (PG crash) | Passive Failure (node down) | Difference |
|------|---------------------------|-----------------------------|------------|
| fast | 24s | 23s | +1s |
| norm | 35s | 34s | +1s |
| safe | 61s | 66s | -5s |
| wide | 122s | 127s | -5s |

Analysis: In fast and norm modes, active failure RTO is slightly higher than passive failure because it waits for primary_start_timeout (start); but in safe and wide modes, since start < ttl - loop, active failure is actually faster. However, active failure has the possibility of self-healing, with potentially extremely short RTO in best case scenarios.

3.4.3 - RTO Trade-offs

Trade-off analysis for RTO (Recovery Time Objective), finding the optimal balance between recovery speed and false failover risk.

RTO (Recovery Time Objective) defines the maximum time required for the system to restore write capability when the primary fails.

For critical transaction systems where availability is paramount, the shortest possible RTO is typically required, such as under one minute.

However, shorter RTO comes at a cost: increased false failover risk. Network jitter may be misinterpreted as a failure, leading to unnecessary failovers. For cross-datacenter/cross-region deployments, RTO requirements are typically relaxed (e.g., 1-2 minutes) to reduce false failover risk.


Trade-offs

The upper limit of unavailability during failover is controlled by the pg_rto parameter. Pigsty provides four preset RTO modes: fast, norm, safe, wide, each optimized for different network conditions and deployment scenarios. The default is norm mode (~45 seconds). You can also specify the RTO upper limit directly in seconds, and the system will automatically map to the closest mode.

When the primary fails, the entire recovery process involves multiple phases: Patroni detects the failure, DCS lock expires, new primary election, promote execution, HAProxy detects the new primary. Reducing RTO means shortening the timeout for each phase, which makes the cluster more sensitive to network jitter, thereby increasing false failover risk.

You need to choose the appropriate mode based on actual network conditions, balancing recovery speed and false failover risk. The worse the network quality, the more conservative mode you should choose; the better the network quality, the more aggressive mode you can choose.

flowchart LR
    A([Primary Failure]) --> B{Patroni<br/>Detected?}

    B -->|PG Crash| C[Attempt Local Restart]
    B -->|Node Down| D[Wait TTL Expiration]

    C -->|Success| E([Local Recovery])
    C -->|Fail/Timeout| F[Release Leader Lock]

    D --> F
    F --> G[Replica Election]
    G --> H[Execute Promote]
    H --> I[HAProxy Detects]
    I --> J([Service Restored])

    style A fill:#dc3545,stroke:#b02a37,color:#fff
    style E fill:#198754,stroke:#146c43,color:#fff
    style J fill:#198754,stroke:#146c43,color:#fff

Four Modes

Pigsty provides four RTO modes to help users make trade-offs under different network conditions.

| Name | fast | norm | safe | wide |
|------|------|------|------|------|
| Use Case | Same rack | Same datacenter (default) | Same region, cross-DC | Cross-region/continent |
| Network | < 1ms, very stable | 1-5ms, normal | 10-50ms, cross-DC | 100-200ms, public network |
| Target RTO | 30s | 45s | 90s | 150s |
| False Failover Risk | Higher | Medium | Lower | Very Low |
| Configuration | `pg_rto: fast` | `pg_rto: norm` | `pg_rto: safe` | `pg_rto: wide` |

RTO Timeline

Patroni / PG HA has two key failure paths: active failure detection (Patroni detects a PG crash and attempts restart) and passive lease expiration (node down waits for TTL expiration to trigger election).


Implementation

The four RTO modes differ in how the following 10 Patroni and HAProxy HA-related parameters are configured.

| Component | Parameter | fast | norm | safe | wide | Description |
|-----------|-----------|------|------|------|------|-------------|
| patroni | ttl | 20 | 30 | 60 | 120 | Leader lock TTL (seconds) |
| | loop_wait | 5 | 5 | 10 | 20 | HA loop check interval (seconds) |
| | retry_timeout | 5 | 10 | 20 | 30 | DCS operation retry timeout (seconds) |
| | primary_start_timeout | 15 | 25 | 45 | 95 | Primary restart wait time (seconds) |
| | safety_margin | 5 | 5 | 10 | 15 | Watchdog safety margin (seconds) |
| haproxy | inter | 1s | 2s | 3s | 4s | Normal state check interval |
| | fastinter | 0.5s | 1s | 1.5s | 2s | State transition check interval |
| | downinter | 1s | 2s | 3s | 4s | DOWN state check interval |
| | rise | 3 | 3 | 3 | 3 | Consecutive successes to mark UP |
| | fall | 3 | 3 | 3 | 3 | Consecutive failures to mark DOWN |

Patroni Parameters

  • ttl: Leader lock TTL. Primary must renew within this time, otherwise lock expires and triggers election. Directly determines passive failure detection delay.
  • loop_wait: Patroni main loop interval. Each loop performs one health check and state sync, affects failure discovery timeliness.
  • retry_timeout: DCS operation retry timeout. During network partition, Patroni retries continuously within this period; after timeout, primary actively demotes to prevent split-brain.
  • primary_start_timeout: Wait time for Patroni to attempt local restart after PG crash. After timeout, releases Leader lock and triggers failover.
  • safety_margin: Watchdog safety margin. Ensures sufficient time to trigger system restart during failures, avoiding split-brain.
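For reference, here is roughly where these knobs live in a Patroni configuration (norm-mode values; an illustrative fragment, not a complete config):

```yaml
# Patroni dynamic (DCS) configuration -- norm mode values, illustrative
ttl: 30
loop_wait: 5
retry_timeout: 10
primary_start_timeout: 25
# the watchdog safety margin lives in Patroni's local configuration:
watchdog:
  mode: automatic
  safety_margin: 5
```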

HAProxy Parameters

  • inter: Health check interval in normal state, used when service status is stable.
  • fastinter: Check interval during state transition, uses shorter interval to accelerate confirmation when state change detected.
  • downinter: Check interval in DOWN state, uses this interval to probe recovery after service marked DOWN.
  • rise: Consecutive successes required to mark UP. After new primary comes online, must pass rise consecutive checks before receiving traffic.
  • fall: Consecutive failures required to mark DOWN. Service must fail fall consecutive times before being marked DOWN.
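On the HAProxy side, these knobs map onto server health-check options roughly as follows (norm-mode values; the backend name, server address, and Patroni `/primary` check on port 8008 are illustrative):

```
backend pg-test-primary
    option httpchk GET /primary                 # Patroni REST API health check
    default-server inter 2s fastinter 1s downinter 2s rise 3 fall 3
    server pg-test-1 10.10.10.11:6432 check port 8008
```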

Key Constraint

Patroni core constraint: Ensures primary can complete demotion before TTL expires, preventing split-brain.

$$loop\_wait + 2 \times retry\_timeout \leq ttl$$
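A quick check confirms that all four preset modes satisfy this constraint:

```python
# Verify the Patroni safety constraint for each Pigsty RTO mode:
#   loop_wait + 2 * retry_timeout <= ttl
MODES = {                      # (ttl, loop_wait, retry_timeout)
    "fast": (20, 5, 5),
    "norm": (30, 5, 10),
    "safe": (60, 10, 20),
    "wide": (120, 20, 30),
}

for name, (ttl, loop_wait, retry_timeout) in MODES.items():
    budget = loop_wait + 2 * retry_timeout
    assert budget <= ttl, f"{name} violates the constraint"
    print(f"{name}: {loop_wait} + 2*{retry_timeout} = {budget} <= {ttl}")
```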

Data Summary


Recommendations

fast mode is suitable for scenarios with extremely high RTO requirements, but requires sufficiently good network quality (latency < 1ms, very low packet loss). Recommended only for same-rack or same-switch deployments, and should be thoroughly tested in production before enabling.

norm mode is Pigsty’s default configuration, sufficient for the vast majority of same-datacenter deployments. An average recovery time of around 35 seconds is within acceptable range while providing a reasonable tolerance window to avoid false failovers from network jitter.

safe mode is suitable for same-city cross-datacenter deployments with higher network latency or occasional jitter. The longer tolerance window effectively prevents false failovers from network jitter, making it the recommended configuration for cross-datacenter disaster recovery.

wide mode is suitable for cross-region or even cross-continent deployments with high network latency and possible public-network-level packet loss. In such scenarios, stability is more important than recovery speed, so an extremely wide tolerance window ensures very low false failover rate.

| Mode | Target RTO | Passive RTO (min/avg/max) | Active RTO (min/avg/max) | Scenario |
|------|-----------|---------------------------|--------------------------|----------|
| fast | 30 | 16 / 23 / 29 | 1 / 24 / 29 | Same switch, high-quality network |
| norm | 45 | 27 / 34 / 41 | 2 / 35 / 41 | Default, same DC, standard network |
| safe | 90 | 53 / 66 / 78 | 3 / 61 / 73 | Same-city active-active / cross-DC DR |
| wide | 150 | 104 / 127 / 150 | 4 / 122 / 145 | Geo-DR / cross-country |
| default | 326 | 22 / 34 / 46 | 2 / 314 / 326 | Patroni default params |

Typically you only need to set pg_rto to the mode name, and Pigsty will automatically configure Patroni and HAProxy parameters. For backward compatibility, Pigsty still supports configuring RTO directly in seconds, but the effect is equivalent to specifying norm mode.

The mode configuration actually loads the corresponding parameter set from pg_rto_plan. You can modify or override this configuration to implement custom RTO strategies.

pg_rto_plan:  # [ttl, loop, retry, start, margin, inter, fastinter, downinter, rise, fall]
  fast: [ 20  ,5  ,5  ,15 ,5  ,'1s' ,'0.5s' ,'1s' ,3 ,3 ]  # rto < 30s
  norm: [ 30  ,5  ,10 ,25 ,5  ,'2s' ,'1s'   ,'2s' ,3 ,3 ]  # rto < 45s
  safe: [ 60  ,10 ,20 ,45 ,10 ,'3s' ,'1.5s' ,'3s' ,3 ,3 ]  # rto < 90s
  wide: [ 120 ,20 ,30 ,95 ,15 ,'4s' ,'2s'   ,'4s' ,3 ,3 ]  # rto < 150s

3.4.4 - Service Access

Pigsty uses HAProxy to provide service access, with optional pgBouncer for connection pooling, and optional L2 VIP and DNS access.

Split read and write operations, route traffic correctly, and deliver PostgreSQL cluster capabilities reliably.

Service is an abstraction: it represents the form in which database clusters expose their capabilities externally, encapsulating underlying cluster details.

Services are crucial for stable access in production environments, showing their value during automatic failover in high availability clusters. Personal users typically don’t need to worry about this concept.


Personal Users

The concept of “service” is for production environments. Personal users with single-node clusters can skip the complexity and directly use instance names or IP addresses to access the database.

For example, Pigsty’s default single-node pg-meta.meta database can be connected directly using three different users:

psql postgres://dbuser_dba:DBUser.DBA@10.10.10.10/meta     # Connect directly with DBA superuser
psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10/meta   # Connect with default business admin user
psql postgres://dbuser_view:DBUser.View@pg-meta/meta       # Connect with default read-only user via instance domain name

Service Overview

In real-world production environments, we use primary-replica database clusters based on replication. Within a cluster, one and only one instance serves as the leader (primary) that can accept writes. Other instances (replicas) continuously fetch change logs from the cluster leader to stay synchronized. Replicas can also handle read-only requests, significantly offloading the primary in read-heavy, write-light scenarios. Therefore, distinguishing write requests from read-only requests is a common practice.

Additionally, for production environments with high-frequency, short-lived connections, we pool requests through connection pool middleware (Pgbouncer) to reduce connection and backend process creation overhead. However, for scenarios like ETL and change execution, we need to bypass the connection pool and directly access the database. Meanwhile, high-availability clusters may undergo failover during failures, causing cluster leadership changes. Therefore, high-availability database solutions require write traffic to automatically adapt to cluster leadership changes. These varying access needs (read-write separation, pooled vs. direct connections, failover auto-adaptation) ultimately lead to the abstraction of the Service concept.

Typically, database clusters must provide this most basic service:

  • Read-write service (primary): Can read from and write to the database

For production database clusters, at least these two services should be provided:

  • Read-write service (primary): Write data: Can only be served by the primary.
  • Read-only service (replica): Read data: Can be served by replicas; falls back to primary when no replicas are available

Additionally, depending on specific business scenarios, there may be other services, such as:

  • Default direct service (default): Allows (admin) users to bypass the connection pool and directly access the database
  • Offline replica service (offline): Dedicated replica not serving online read traffic, used for ETL and analytical queries
  • Sync replica service (standby): Read-only service with no replication delay, handled by synchronous standby/primary for read queries
  • Delayed replica service (delayed): Access data from the same cluster as it was some time ago, handled by delayed replicas

Access Services

Pigsty’s service delivery boundary stops at the cluster’s HAProxy. Users can access these load balancers through various means.

The typical approach is to use DNS or VIP access, binding them to all or any number of load balancers in the cluster.

pigsty-access.jpg

You can use different host & port combinations, which provide PostgreSQL service in different ways.

Host

| Type | Sample | Description |
|------|--------|-------------|
| Cluster Domain Name | pg-test | Access via cluster domain name (resolved by dnsmasq @ infra nodes) |
| Cluster VIP Address | 10.10.10.3 | Access via L2 VIP address managed by vip-manager, bound to primary node |
| Instance Hostname | pg-test-1 | Access via any instance hostname (resolved by dnsmasq @ infra nodes) |
| Instance IP Address | 10.10.10.11 | Access any instance’s IP address |

Port

Pigsty uses different ports to distinguish pg services

| Port | Service | Type | Description |
|------|---------|------|-------------|
| 5432 | postgres | Database | Direct access to postgres server |
| 6432 | pgbouncer | Middleware | Access postgres through connection pool middleware |
| 5433 | primary | Service | Access primary pgbouncer (or postgres) |
| 5434 | replica | Service | Access replica pgbouncer (or postgres) |
| 5436 | default | Service | Access primary postgres |
| 5438 | offline | Service | Access offline postgres |

Combinations

# Access via cluster domain
postgres://test@pg-test:5432/test # DNS -> L2 VIP -> primary direct connection
postgres://test@pg-test:6432/test # DNS -> L2 VIP -> primary connection pool -> primary
postgres://test@pg-test:5433/test # DNS -> L2 VIP -> HAProxy -> primary connection pool -> primary
postgres://test@pg-test:5434/test # DNS -> L2 VIP -> HAProxy -> replica connection pool -> replica
postgres://dbuser_dba@pg-test:5436/test # DNS -> L2 VIP -> HAProxy -> primary direct connection (for admin)
postgres://dbuser_stats@pg-test:5438/test # DNS -> L2 VIP -> HAProxy -> offline direct connection (for ETL/personal queries)

# Access via cluster VIP directly
postgres://test@10.10.10.3:5432/test # L2 VIP -> primary direct access
postgres://test@10.10.10.3:6432/test # L2 VIP -> primary connection pool -> primary
postgres://test@10.10.10.3:5433/test # L2 VIP -> HAProxy -> primary connection pool -> primary
postgres://test@10.10.10.3:5434/test # L2 VIP -> HAProxy -> replica connection pool -> replica
postgres://dbuser_dba@10.10.10.3:5436/test # L2 VIP -> HAProxy -> primary direct connection (for admin)
postgres://dbuser_stats@10.10.10.3:5438/test # L2 VIP -> HAProxy -> offline direct connection (for ETL/personal queries)

# Directly specify any cluster instance name
postgres://test@pg-test-1:5432/test # DNS -> database instance direct connection (singleton access)
postgres://test@pg-test-1:6432/test # DNS -> connection pool -> database
postgres://test@pg-test-1:5433/test # DNS -> HAProxy -> connection pool -> database read/write
postgres://test@pg-test-1:5434/test # DNS -> HAProxy -> connection pool -> database read-only
postgres://dbuser_dba@pg-test-1:5436/test # DNS -> HAProxy -> database direct connection
postgres://dbuser_stats@pg-test-1:5438/test # DNS -> HAProxy -> database offline read/write

# Directly specify any cluster instance IP access
postgres://test@10.10.10.11:5432/test # Database instance direct connection (directly specify instance, no automatic traffic distribution)
postgres://test@10.10.10.11:6432/test # Connection pool -> database
postgres://test@10.10.10.11:5433/test # HAProxy -> connection pool -> database read/write
postgres://test@10.10.10.11:5434/test # HAProxy -> connection pool -> database read-only
postgres://dbuser_dba@10.10.10.11:5436/test # HAProxy -> database direct connection
postgres://dbuser_stats@10.10.10.11:5438/test # HAProxy -> database offline read-write

# Smart client: read/write separation via URL
postgres://test@10.10.10.11:6432,10.10.10.12:6432,10.10.10.13:6432/test?target_session_attrs=primary
postgres://test@10.10.10.11:6432,10.10.10.12:6432,10.10.10.13:6432/test?target_session_attrs=prefer-standby

3.5 - Point-in-Time Recovery

Pigsty uses pgBackRest to implement PostgreSQL point-in-time recovery, allowing users to roll back to any point in time within the backup policy window.

When you accidentally delete data, tables, or even the entire database, PITR lets you return to any point in time and avoid data loss from software defects and human error.

— This “magic” once reserved for senior DBAs is now available out of the box to everyone.


Overview

Pigsty’s PostgreSQL clusters come with auto-configured Point-in-Time Recovery (PITR) capability, powered by the backup component pgBackRest and optional object storage repository MinIO.

High availability solutions can address hardware failures but are powerless against data deletion/overwriting/database drops caused by software defects and human errors. For such situations, Pigsty provides out-of-the-box Point-in-Time Recovery (PITR) capability, enabled by default without additional configuration.

Pigsty provides default configurations for base backups and WAL archiving. You can use local directories and disks, or dedicated MinIO clusters or S3 object storage services, to store backups and achieve geo-redundant disaster recovery. With local disks, the default policy retains the ability to recover to any point within the past day; with MinIO or S3, to any point within the past week. As long as storage space (and budget) permits, you can retain an arbitrarily long recoverable time window.


What Problems Does PITR Solve?

  • Enhanced disaster recovery: RPO drops from ∞ to tens of MB, RTO drops from ∞ to hours/minutes.
  • Ensures data security: Data integrity in C/I/A: avoids data consistency issues caused by accidental deletion.
  • Ensures data security: Data availability in C/I/A: provides fallback for “permanently unavailable” disaster scenarios

| Standalone Configuration Strategy | Event | RTO | RPO |
|-----------------------------------|-------|-----|-----|
| Nothing | Crash | Permanently lost | All lost |
| Base Backup | Crash | Depends on backup size and bandwidth (hours) | Lose data since last backup (hours to days) |
| Base Backup + WAL Archive | Crash | Depends on backup size and bandwidth (hours) | Lose unarchived data (tens of MB) |

What Are the Costs of PITR?

  • Reduces C in data security: Confidentiality, creates additional leak points, requires additional backup protection.
  • Extra resource consumption: Local storage or network traffic/bandwidth overhead, usually not a concern.
  • Increased complexity: Users need to pay backup management costs.

Limitations of PITR

If only PITR is used for failure recovery, RTO and RPO metrics are inferior compared to high availability solutions, and typically both should be used together.

  • RTO: With only standalone + PITR, recovery time depends on backup size and network/disk bandwidth, ranging from tens of minutes to hours or days.
  • RPO: With only standalone + PITR, some data may be lost during crashes - one or several WAL segment files may not yet be archived, losing 16 MB to tens of MB of data.
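To put a number on the “tens of MB” bound above: PostgreSQL archives WAL in 16 MB segments by default, so the worst-case loss is simply the number of unarchived segments times the segment size (a simplified sketch; the helper name is illustrative):

```python
WAL_SEGMENT_MB = 16  # PostgreSQL default WAL segment size

def worst_case_rpo_mb(unarchived_segments):
    """Upper bound on data lost if the primary is destroyed
    before its pending WAL segments are archived."""
    return unarchived_segments * WAL_SEGMENT_MB

# one pending segment -> 16 MB; a short archiver backlog of 3 -> 48 MB
print(worst_case_rpo_mb(1), worst_case_rpo_mb(3))
```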

Besides PITR, you can also use delayed clusters in Pigsty to address data deletion/modification caused by human errors or software defects.


How It Works

Point-in-time recovery allows you to restore and roll back your cluster to “any point” in the past, avoiding data loss caused by software defects and human errors. To achieve this, two preparations are needed: Base Backup and WAL Archiving. Having a base backup allows users to restore the database to its state at backup time, while having WAL archives starting from a base backup allows users to restore the database to any point after the base backup time.

For detailed mechanisms, see Base Backup and Point-in-Time Recovery; for specific operations, refer to PGSQL Admin: Backup and Recovery.

Base Backup

Pigsty uses pgBackRest to manage PostgreSQL backups. pgBackRest initializes empty repositories on all cluster instances but only actually uses the repository on the cluster primary.

pgBackRest supports three backup modes: full backup, incremental backup, and differential backup, with the first two being most commonly used. Full backup takes a complete physical snapshot of the database cluster at the current moment; incremental backup records the differences between the current database cluster and the previous full backup.

Pigsty provides a wrapper command for backups: /pg/bin/pg-backup [full|incr]. You can schedule regular base backups as needed through Crontab or any other task scheduling system.

WAL Archiving

Pigsty enables WAL archiving on the cluster primary by default and uses the pgbackrest command-line tool to continuously push WAL segment files to the backup repository.

pgBackRest automatically manages required WAL files and timely cleans up expired backups and their corresponding WAL archive files based on the backup retention policy.

If you don’t need PITR functionality, you can disable WAL archiving by configuring the cluster: archive_mode: off and remove node_crontab to stop scheduled backup tasks.


Implementation

By default, Pigsty provides two preset backup strategies: The default uses local filesystem backup repository, performing one full backup daily to ensure users can roll back to any point within the past day. The alternative strategy uses dedicated MinIO clusters or S3 storage for backups, with weekly full backups, daily incremental backups, and two weeks of backup and WAL archive retention by default.

Pigsty uses pgBackRest to manage backups, receive WAL archives, and perform PITR. Backup repositories can be flexibly configured (pgbackrest_repo): defaults to primary’s local filesystem (local), but can also use other disk paths, or the included optional MinIO service (minio) and cloud S3 services.

pgbackrest_enabled: true          # enable pgBackRest on pgsql host?
pgbackrest_clean: true            # remove pg backup data during init?
pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
pgbackrest_method: local          # pgbackrest repo method: local, minio, [user-defined...]
pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
  local:                          # default pgbackrest repo with local posix fs
    path: /pg/backup              # local backup directory, `/pg/backup` by default
    retention_full_type: count    # retention full backup by count
    retention_full: 2             # keep at most 3 full backups, at least 2, when using local fs repo
  minio:                          # optional minio repo for pgbackrest
    type: s3                      # minio is s3-compatible, so use s3
    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    s3_region: us-east-1          # minio region, us-east-1 by default, not used for minio
    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    s3_key: pgbackrest            # minio user access key for pgbackrest
    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    s3_uri_style: path            # use path style uri for minio rather than host style
    path: /pgbackrest             # minio backup path, `/pgbackrest` by default
    storage_port: 9000            # minio port, 9000 by default
    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    bundle: y                     # bundle small files into a single file
    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    retention_full_type: time     # retention full backup by time on minio repo
    retention_full: 14            # keep full backup for last 14 days
  # You can also add other optional backup repos, such as S3, for geo-redundant disaster recovery

Pigsty parameter pgbackrest_repo target repositories are converted to repository definitions in the /etc/pgbackrest/pgbackrest.conf configuration file. For example, if you define a US West S3 repository for storing cold backups, you can use the following reference configuration.

s3:    # ------> /etc/pgbackrest/pgbackrest.conf
  repo1-type: s3                                   # ----> repo1-type=s3
  repo1-s3-region: us-west-1                       # ----> repo1-s3-region=us-west-1
  repo1-s3-endpoint: s3-us-west-1.amazonaws.com    # ----> repo1-s3-endpoint=s3-us-west-1.amazonaws.com
  repo1-s3-key: '<your_access_key>'                # ----> repo1-s3-key=<your_access_key>
  repo1-s3-key-secret: '<your_secret_key>'         # ----> repo1-s3-key-secret=<your_secret_key>
  repo1-s3-bucket: pgsql                           # ----> repo1-s3-bucket=pgsql
  repo1-s3-uri-style: host                         # ----> repo1-s3-uri-style=host
  repo1-path: /pgbackrest                          # ----> repo1-path=/pgbackrest
  repo1-bundle: y                                  # ----> repo1-bundle=y
  repo1-cipher-type: aes-256-cbc                   # ----> repo1-cipher-type=aes-256-cbc
  repo1-cipher-pass: pgBackRest                    # ----> repo1-cipher-pass=pgBackRest
  repo1-retention-full-type: time                  # ----> repo1-retention-full-type=time
  repo1-retention-full: 90                         # ----> repo1-retention-full=90
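The mapping shown above is mechanical: each key of a pgbackrest_repo entry gets a `repoN-` prefix and underscores become hyphens. A minimal sketch of that transformation (the helper is illustrative, not Pigsty’s actual code):

```python
def render_pgbackrest_repo(repo, index=1):
    """Render one pgbackrest_repo entry into pgbackrest.conf lines,
    following the key mapping shown above:
    'repo<N>-' prefix, underscores -> hyphens."""
    return [f"repo{index}-{key.replace('_', '-')}={value}"
            for key, value in repo.items()]

s3_repo = {"type": "s3", "s3_region": "us-west-1", "path": "/pgbackrest"}
print("\n".join(render_pgbackrest_repo(s3_repo)))
# -> repo1-type=s3
#    repo1-s3-region=us-west-1
#    repo1-path=/pgbackrest
```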

Recovery

You can directly use the following wrapper commands for PostgreSQL database cluster point-in-time recovery.

Pigsty uses incremental differential parallel recovery by default, allowing you to recover to a specified point in time at maximum speed.

pg-pitr                                 # Restore to the end of WAL archive stream (e.g., for entire datacenter failure)
pg-pitr -i                              # Restore to the most recent backup completion time (rarely used)
pg-pitr --time="2022-12-30 14:44:44+08" # Restore to a specified point in time (for database or table drops)
pg-pitr --name="my-restore-point"       # Restore to a named restore point created with pg_create_restore_point
pg-pitr --lsn="0/7C82CB8" -X            # Restore to immediately before the LSN
pg-pitr --xid="1234567" -X -P           # Restore to immediately before the specified transaction ID, then promote cluster to primary
pg-pitr --backup=latest                 # Restore to the latest backup set
pg-pitr --backup=20221108-105325        # Restore to a specific backup set, backup sets can be listed with pgbackrest info

pg-pitr                                 # pgbackrest --stanza=pg-meta restore
pg-pitr -i                              # pgbackrest --stanza=pg-meta --type=immediate restore
pg-pitr -t "2022-12-30 14:44:44+08"     # pgbackrest --stanza=pg-meta --type=time --target="2022-12-30 14:44:44+08" restore
pg-pitr -n "my-restore-point"           # pgbackrest --stanza=pg-meta --type=name --target=my-restore-point restore
pg-pitr -b 20221108-105325F             # pgbackrest --stanza=pg-meta --type=immediate --set=20221108-105325F restore
pg-pitr -l "0/7C82CB8" -X               # pgbackrest --stanza=pg-meta --type=lsn --target="0/7C82CB8" --target-exclusive restore
pg-pitr -x 1234567 -X -P                # pgbackrest --stanza=pg-meta --type=xid --target="1234567" --target-exclusive --target-action=promote restore
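The option mapping above can be sketched as a small helper that assembles the underlying pgbackrest invocation (an illustrative sketch, not Pigsty’s actual implementation):

```python
def pgbackrest_restore_cmd(stanza, target_type=None, target=None,
                           exclusive=False, promote=False):
    """Assemble a `pgbackrest restore` command line following the
    pg-pitr option mapping shown above (illustrative sketch)."""
    cmd = ["pgbackrest", f"--stanza={stanza}"]
    if target_type:
        cmd.append(f"--type={target_type}")
    if target is not None:
        cmd.append(f"--target={target}")
    if exclusive:
        cmd.append("--target-exclusive")       # pg-pitr -X
    if promote:
        cmd.append("--target-action=promote")  # pg-pitr -P
    cmd.append("restore")
    return cmd

# pg-pitr -l "0/7C82CB8" -X on cluster pg-meta:
print(" ".join(pgbackrest_restore_cmd("pg-meta", "lsn", "0/7C82CB8", exclusive=True)))
# -> pgbackrest --stanza=pg-meta --type=lsn --target=0/7C82CB8 --target-exclusive restore
```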

When performing PITR, you can use Pigsty’s monitoring system to observe the cluster LSN position status and determine whether recovery to the specified point in time, transaction point, LSN position, or other point was successful.

pitr

3.5.1 - How PITR Works

PITR mechanism: base backup, WAL archive, recovery window, and transaction boundaries

The core principle of PITR is: base backup + WAL archiving = recover to any point in time. In Pigsty, this is implemented by pgBackRest, running scheduled backups + WAL archiving automatically.


Three Elements

| Element | Purpose | Pigsty Implementation |
|---------|---------|-----------------------|
| Base Backup | Provides a consistent physical snapshot, recovery starting point | pg-backup + pgbackrest + pg_crontab |
| WAL Archiving | Records all changes after backup, defines recovery path | archive_mode=on + archive_command=pgbackrest ... archive-push |
| Recovery Target | Specifies where to stop recovery | pg_pitr params / pg-pitr script / pgbackrest restore |

Base Backup

Base backup is a physical snapshot at a point in time, the starting point of PITR. Pigsty uses pgBackRest and provides pg-backup wrapper for common ops.

Backup Types

| Type | Description | Restore Cost |
|------|-------------|--------------|
| Full | Copies all data files | Fastest restore, largest space |
| Differential | Changes since the latest full backup | Restore needs full + diff |
| Incremental | Changes since the latest backup of any type | Smallest space, restore needs the full chain |
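The restore-cost column follows from how backup sets depend on each other. A minimal sketch of the dependency chain, under the simplifying assumptions that backups are listed chronologically, a full backup precedes any diff/incr, and each incremental depends on the backup immediately before it:

```python
def restore_chain(backups):
    """Backup sets needed to restore from the newest backup.

    `backups` is a chronological list of 'full' | 'diff' | 'incr'.
    A full is self-contained; a diff depends only on the latest full;
    an incr depends on the backup immediately before it.
    """
    chain = []
    for kind in reversed(backups):
        chain.append(kind)
        if kind == "full":
            break                    # self-contained: chain complete
        if kind == "diff":
            chain.append("full")     # diff chains straight to the latest full
            break
    return list(reversed(chain))

print(restore_chain(["full", "incr", "incr"]))  # -> ['full', 'incr', 'incr']
print(restore_chain(["full", "incr", "diff"]))  # -> ['full', 'diff']
```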

Pigsty Defaults

  • pg-backup defaults to incremental, and auto-runs a full if none exists.
  • Backup jobs are configured via pg_crontab and written to postgres crontab.
  • Script detects role; only primary runs, replicas exit.

Higher backup frequency means less WAL to replay and faster recovery. See Backup Mechanism and Backup Policy.


WAL Archiving

WAL (Write-Ahead Log) records every database change. PITR relies on continuous WAL archiving to replay to the target time.

Pigsty Archiving Pipeline

Pigsty enables WAL archiving by default, using pgBackRest:

  • archive_mode = on
  • archive_command = pgbackrest --stanza=<cluster> archive-push %p

pgBackRest continuously receives WAL segments and cleans expired archives per retention policy. During recovery, pgBackRest uses archive-get to pull needed WAL.

Key Impacts

  • Archive delay shortens the right boundary of recovery window.
  • Repo unavailability interrupts archiving, directly impacting PITR.

See Backup Mechanism and Backup Repository.


Recovery Targets and Transaction Boundaries

PITR targets are defined by PostgreSQL recovery_target_* parameters, wrapped by pg_pitr / pg-pitr in Pigsty.

Target Types

| Target | Param | Description | Typical Scenario |
|--------|-------|-------------|------------------|
| latest | N/A | Recover to end of WAL stream | Disaster recovery, restore to latest |
| time | time | Recover to a specific timestamp | Accidental deletion |
| xid | xid | Recover to a specific transaction ID | Bad transaction rollback |
| lsn | lsn | Recover to a specific LSN | Precise rollback |
| name | name | Recover to a named restore point | Planned checkpoint |
| immediate | type: immediate | Stop at first consistent point | Fastest restore |

Inclusive vs Exclusive

Recovery targets are inclusive by default. To roll back before the target, set exclusive: true in pg_pitr, mapping to recovery_target_inclusive = false.

Transaction Boundaries

PITR keeps committed transactions before the target, and rolls back uncommitted ones.

gantt
    title Transaction Boundaries and Recovery Target
    dateFormat X
    axisFormat %s
    section Transaction A
    BEGIN → COMMIT (committed) :done, a1, 0, 2
    section Transaction B
    BEGIN → uncommitted :active, b1, 1, 4
    section Recovery
    Recovery target :milestone, m1, 2, 0

See Restore Operations.


Recovery Window

The recovery window is defined by two boundaries:

  • Left boundary: earliest available base backup
  • Right boundary: latest archived WAL

pitr-scope

Window length depends on backup frequency, backup retention, and WAL retention:

  • local repo keeps 2 full backups by default, window is 24–48 hours.
  • minio repo keeps 14 days by time, window is 1–2 weeks.
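A toy calculation of the window under the local policy (daily full backups, retention_full: 2 by count); the helper and timestamps are illustrative:

```python
from datetime import datetime, timedelta

def recovery_window(backup_times, now, retention_count=2):
    """Left/right PITR boundaries under count-based retention:
    left = oldest retained full backup, right = latest archived WAL
    (taken as `now`, assuming archiving keeps up)."""
    retained = sorted(backup_times)[-retention_count:]
    return retained[0], now

now = datetime(2024, 1, 3, 12, 0)
# daily full backups taken 0.5, 1.5 and 2.5 days ago
backups = [now - timedelta(days=d, hours=12) for d in range(3)]
left, right = recovery_window(backups, now)
print(right - left)  # -> 1 day, 12:00:00 (the window reaches back 36 hours)
```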

See Backup Policy and Backup Repository.


Timeline

Timeline distinguishes historical branches. New timelines are created by:

  1. PITR restore
  2. Replica promote
  3. Failover

gitGraph
    commit id: "Initial"
    commit id: "Write data"
    commit id: "More writes"
    branch Timeline-2
    checkout Timeline-2
    commit id: "PITR point 1"
    commit id: "New writes"
    branch Timeline-3
    checkout Timeline-3
    commit id: "PITR point 2"
    commit id: "Continue"
    checkout main
    commit id: "Original continues"

When multiple timelines exist, you can specify timeline; Pigsty defaults to latest. See Restore Operations.
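
When multiple branches exist, a hedged pg_pitr sketch that targets an older branch explicitly (the timeline number and timestamp are illustrative):

```yaml
pg_pitr:
  time: "2025-01-15 10:00:00+08"  # illustrative target on the earlier branch
  timeline: 2                     # explicit timeline instead of the default latest
```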

3.5.2 - PITR Architecture

Pigsty PITR architecture: pgBackRest, repositories, and execution flow

Pigsty uses pgBackRest as the PostgreSQL backup and recovery engine, providing out-of-the-box Point-in-Time Recovery (PITR).

This page explains the architecture: who runs backups, where data flows, how repositories are organized, and how continuity is kept after failover.


Overview

PITR architecture has three main pipelines: backup execution, WAL archiving, restore execution.

| Pipeline    | Entry                              | Engine                  | Destination           |
|-------------|------------------------------------|-------------------------|-----------------------|
| Backup      | pg-backup + pg_crontab             | pgbackrest backup       | repo backup/          |
| WAL Archive | PostgreSQL archive_command         | pgbackrest archive-push | repo archive/         |
| Restore     | pg_pitr / pg-pitr / pgsql-pitr.yml | pgbackrest restore      | target data directory |

See Backup Mechanism and Restore Operations for details.


Components and Responsibilities

| Component           | Role            | Description                                          |
|---------------------|-----------------|------------------------------------------------------|
| PostgreSQL          | Data source     | Generates data files and WAL archive stream          |
| pgBackRest          | Backup engine   | Runs backups, receives WAL, performs restore         |
| pg-backup           | Backup entry    | Pigsty wrapper for pgbackrest backup                 |
| pg_pitr / pg-pitr   | Restore entry   | Pigsty params/script for pgbackrest restore          |
| Backup repository   | Storage backend | Stores backup/ and archive/, supports local / minio / s3 |
| pgbackrest_exporter | Metrics output  | Exports backup status metrics, default port 9854     |

Data Flow

flowchart TB
    subgraph cluster["PostgreSQL Cluster"]
        direction TB
        primary["Primary<br/>PostgreSQL"]
        pb["pgBackRest"]
        cron["pg-backup / pg_crontab"]
    end
    repo["Backup Repo<br/>local / minio / s3"]
    restore["Restore Target Data Dir"]

    cron --> pb
    primary -->|base backup| pb
    primary -->|WAL archive| pb
    pb -->|backup/archive| repo
    repo -->|restore/archive-get| pb
    pb -->|restore| restore

Key points:

  • Backup is triggered by pg-backup, executing pgbackrest backup to write base backups.
  • Archiving is triggered by PostgreSQL archive_command, pushing WAL segments to repo.
  • Restore reads backup and WAL from repo, rebuilding data dir via pgbackrest restore.

Deployment and Roles

pgBackRest is installed on all PostgreSQL nodes, but only the primary executes backups:

  • pg-backup detects the node role; on replicas it exits without doing anything.
  • After failover, the new primary takes over backup/archiving automatically.

This decouples the backup pipeline from the HA topology and avoids interruptions on switchover.


Repository and Isolation

Stanza (Cluster Identity)

pgBackRest uses stanza to isolate cluster backups, mapped to pg_cluster in Pigsty:

backup-repo
├── pg-meta/
│   ├── backup/
│   └── archive/
└── pg-test/
    ├── backup/
    └── archive/

Repository Types

Pigsty selects repo type via pgbackrest_method and config via pgbackrest_repo:

| Type  | Characteristics               | Use Cases              |
|-------|-------------------------------|------------------------|
| local | Local disk, fastest restore   | Dev/test, single node  |
| minio | Object storage, centralized   | Production, DR         |
| s3    | Cloud object storage          | Cloud, cross-region DR |

Production should use a remote repo (MinIO/S3) so data and backups are not lost together on host failure. See Backup Repository.


Config Mapping

Pigsty renders pgbackrest_repo into /etc/pgbackrest/pgbackrest.conf. Backup logs are written under /pg/log/pgbackrest/; restores generate a temporary config and their own logs.

See Backup Mechanism for details.


Observability

pgbackrest_exporter exports backup status metrics (last backup time, type, size, etc), enabled by default on port 9854. You can control it with pgbackrest_exporter_enabled.


3.5.3 - PITR Tradeoffs

PITR strategy tradeoffs: repository choice, space planning, and recommendations

When designing a PITR strategy, the core tradeoffs are: backup repository location, recovery window length, and restore speed vs storage cost.

This page helps you make practical choices across these dimensions.


Local vs Remote

Repository location is the first decision in PITR strategy.

Local Repository

Store backups on primary local disk (pgbackrest_method = local):

Pros

  • Simple, out-of-the-box
  • Fast restore (local I/O)
  • No external dependency

Cons

  • No geo-DR; backups may be lost with host
  • Limited by local disk capacity
  • Same failure domain as production data

Remote Repository

Store backups on MinIO / S3 (pgbackrest_method = minio|s3):

Pros

  • Geo-DR, backups independent from DB host
  • Near-unlimited capacity, shared by multiple clusters
  • Works with encryption, versioning, and other safety controls

Cons

  • Restore speed depends on network bandwidth
  • Depends on object storage availability
  • Higher deployment and ops cost

How to Choose

| Scenario          | Recommended Repo      | Reason                            |
|-------------------|-----------------------|-----------------------------------|
| Dev/Test          | local                 | Simple and sufficient             |
| Single-node prod  | minio / s3            | Recover even if host fails        |
| Cluster prod      | local + minio         | Balance speed and DR              |
| Critical business | multiple remote repos | Multi-site DR, maximum protection |

See Backup Repository for details.


Space vs Window

Longer recovery window means more storage. Window length is defined by backup retention + WAL retention.

Factors

| Factor           | Impact                                            |
|------------------|---------------------------------------------------|
| Database size    | Baseline for full backup size                     |
| Change rate      | Affects incremental backups and WAL size          |
| Backup frequency | Higher frequency = faster restore but more storage |
| Retention        | Longer retention = longer window, more storage    |

Intuitive Examples

Assume DB is 100GB, daily change 10GB:

Daily full backups (keep 2)

[figure: pitr-space]

  • Full backups: 100GB × 2 ≈ 200GB
  • WAL archive: 10GB × 2 ≈ 20GB
  • Total: ~2–3x DB size

Weekly full + daily incremental (keep 14 days)

[figure: pitr-space2]

  • Full backups: 100GB × 2 ≈ 200GB
  • Incremental: ~10GB × 12 ≈ 120GB
  • WAL archive: 10GB × 14 ≈ 140GB
  • Total: ~4–5x DB size

Space vs window is a hard constraint: you cannot get a longer window with less storage.
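
The arithmetic above can be captured in a small estimator; a minimal Python sketch whose defaults mirror the two examples:

```python
def pitr_space_gb(db_gb, change_gb_per_day, fulls=2, incr_days=0, wal_days=2):
    # rough estimate: retained full backups + incremental backups + archived WAL
    full_gb = db_gb * fulls
    incr_gb = change_gb_per_day * incr_days
    wal_gb = change_gb_per_day * wal_days
    return full_gb + incr_gb + wal_gb

print(pitr_space_gb(100, 10))                             # daily full, keep 2 → 220
print(pitr_space_gb(100, 10, incr_days=12, wal_days=14))  # weekly full + incr, 14d → 460
```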


Strategy Choices

Daily Full Backup

Simplest and most reliable, also the default for local repo:

  • Full backup once per day
  • Keep 2 full backups
  • Recovery window about 24–48 hours

Suitable when:

  • DB size is small to medium (< 500GB)
  • Backup window is sufficient
  • Storage cost is not a concern

Full + Incremental

Space-optimized strategy, for large DBs or longer windows:

  • Weekly full backup
  • Incremental on other days
  • Keep 14 days

Suitable when:

  • Large DB size
  • Using object storage
  • Need 1–2 week recovery window

flowchart TD
    A{"DB size<br/>< 100GB?"} -->|Yes| B["Daily full backup"]
    A -->|No| C{"DB size<br/>< 500GB?"}
    C -->|No| D["Full + incremental"]
    C -->|Yes| E{"Backup window<br/>sufficient?"}
    E -->|Yes| F["Daily full backup"]
    E -->|No| G["Full + incremental"]
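
The decision tree above can be expressed as a tiny helper; thresholds follow the flowchart and should be tuned per environment:

```python
def choose_strategy(db_size_gb, backup_window_ok=True):
    # mirrors the flowchart: small DBs take daily fulls,
    # large DBs or tight backup windows take full + incremental
    if db_size_gb < 100:
        return "daily full"
    if db_size_gb >= 500:
        return "full + incremental"
    return "daily full" if backup_window_ok else "full + incremental"

print(choose_strategy(50))           # → daily full
print(choose_strategy(300, False))   # → full + incremental
print(choose_strategy(800))          # → full + incremental
```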

Dev/Test

pg_crontab:
  - '00 01 * * * /pg/bin/pg-backup full'
pgbackrest_method: local

  • Window: 24–48 hours
  • Characteristics: simplest and lowest cost

Production Clusters

pg_crontab:
  - '00 01 * * 1 /pg/bin/pg-backup full'
  - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'
pgbackrest_method: minio

  • Window: 7–14 days
  • Characteristics: remote DR, production-ready

Critical Business

Dual-repo strategy (local + remote):

pgbackrest_method: local
pgbackrest_repo:
  local: { path: /pg/backup, retention_full: 2 }
  minio: { type: s3, retention_full_type: time, retention_full: 14 }

  • Local repo for fast restore
  • Remote repo for DR

See Backup Policy and Backup Repository for details.

3.5.4 - PITR Scenarios

Typical PITR scenarios: data deletion, DDL drops, batch errors, branch restore, and site disasters

The value of PITR is not just “rolling back a database”, but turning irreversible human/software mistakes into recoverable problems. It covers cases from “drop one table” to “entire site down”, addressing logical errors and disaster recovery.


Overview

PITR addresses these scenarios:

| Scenario Type              | Typical Problem                          | Recommended Strategy       | Recovery Target |
|----------------------------|------------------------------------------|----------------------------|-----------------|
| Accidental DML             | DELETE/UPDATE without WHERE, script mistake | Branch restore first    | time / xid      |
| DDL drops                  | DROP TABLE/DATABASE, bad migration       | Branch restore             | time / name     |
| Batch errors / bad release | Buggy release pollutes data              | Branch restore + verify    | time / xid      |
| Audit / investigation      | Need to inspect historical state         | Branch restore (read-only) | time / lsn      |
| Site disaster / total loss | Hardware failure, ransomware, power outage | In-place or rebuild      | latest / time   |

A Simple Rule of Thumb

  • If writes already caused business errors, consider PITR.
  • Need online verification or partial recovery → branch restore.
  • Need service restored ASAP → in-place restore (accept downtime).

flowchart TD
    A["Issue discovered"] --> B{"Downtime allowed?"}
    B -->|Yes| C["In-place restore<br/>shortest path"]
    B -->|No| D["Branch restore<br/>verify then switch"]
    C --> E["Rebuild backups after restore"]
    D --> F["Verify / export / cut traffic"]

Scenario Details

Accidental DML (Delete/Update)

Typical issues:

  • DELETE without WHERE
  • Bad UPDATE overwrites key fields
  • Batch script bugs spread bad data

Approach:

  1. Stop the bleeding: pause related apps or writes.
  2. Locate time point: use logs/metrics/business feedback.
  3. Choose strategy:
    • Downtime allowed: in-place restore before error
    • No downtime: branch restore, export correct data back

Recommended targets:

  • Known transaction: xid + exclusive: true
  • Time-based only: time + exclusive: true

pg_pitr: { xid: "250000", exclusive: true }
# or
pg_pitr: { time: "2025-01-15 14:30:00+08", exclusive: true }

DDL Drops (Table/DB)

Typical issues:

  • DROP TABLE / DROP DATABASE
  • Wrong migration scripts
  • Cleanup scripts deleted production objects

Why branch restore:

DDL is irreversible; in-place restore rolls back the whole cluster. Branch restore lets you export only the dropped objects back, minimizing impact.

Recommended flow:

  1. Create branch cluster and PITR to before drop
  2. Validate schema/data
  3. pg_dump target objects
  4. Import back to production

sequenceDiagram
    participant O as Original Cluster
    participant B as Branch Cluster
    O->>B: Create branch cluster
    Note over B: PITR to before drop
    B->>O: Dump and import objects
    Note over B: Destroy branch after verification

Batch Errors / Bad Releases

Typical issues:

  • Release writes incorrect data
  • ETL/batch jobs pollute large datasets
  • Fix scripts fail or scope unclear

Principles:

  • Prefer branch restore: verify before cutover
  • Compare data diff between original and branch

Suggested flow:

  1. Determine error window
  2. Branch restore to before error
  3. Validate key tables
  4. Export partial data or cut traffic

This scenario often needs business review, so branch restore is safer and controllable.


Audit / Investigation

Typical issues:

  • Need to inspect historical data state
  • Compare “correct history” with current data

Recommended: branch restore (read-only)

Benefits:

  • No production impact
  • Try multiple time points
  • Fits audit, verification, forensics

pg_pitr: { time: "2025-01-15 10:00:00+08" }  # create read-only branch

Site Disaster / Total Loss

This is the ultimate PITR fallback. When HA cannot help (primary + replicas down, power outage, ransomware), PITR is the last line of defense.

Key prerequisite:

A remote repo (MinIO/S3) is required: a local repo is lost together with the host, making recovery impossible.

Recovery flow:

  1. Prepare new hosts or new site
  2. Restore cluster config and point to remote repo
  3. Run PITR restore (usually latest)
  4. Validate data and restore service

./pgsql-pitr.yml -l pg-meta   # restore to end of WAL archive

In-place vs Branch Restore

| Dimension   | In-place Restore                | Branch Restore                 |
|-------------|---------------------------------|--------------------------------|
| Downtime    | Required                        | Not required                   |
| Risk        | High (directly impacts prod)    | Low (verify before action)     |
| Complexity  | Low                             | Medium (new cluster + export)  |
| Recommended | Disaster recovery, fast restore | Mis-ops, audit, complex cases  |

For most production scenarios, branch restore is the default recommendation. Only choose in-place restore when service must be restored ASAP.


3.6 - Monitoring System

How Pigsty’s monitoring system is architected and how monitored targets are automatically managed.

3.7 - Security and Compliance

Authentication, authorization, encryption, audit, and compliance baseline for database and infrastructure security.

Pigsty’s security goals are the CIA triad:

  • Confidentiality: prevent unauthorized access and leakage
  • Integrity: prevent tampering or silent corruption
  • Availability: prevent outages from failures

Pigsty’s security philosophy:

  • Secure by default: out-of-the-box baseline with minimal config and broad coverage.
  • Defense in depth: layered protections so one breach does not collapse the system.
  • Least privilege: roles and privileges enforce least-privilege by default.
  • Compliance-ready: security capabilities plus process can meet audits.

Default Security Baseline (What Problems It Solves)

| Security Option     | Default                         | Problems Solved                               |
|---------------------|---------------------------------|-----------------------------------------------|
| Password encryption | pg_pwd_enc: scram-sha-256       | Prevent weak hashes and plaintext leakage     |
| Data checksums      | pg_checksum: true               | Detect silent data corruption                 |
| HBA layering        | Admin from internet must use ssl | Prevent plaintext access from the public network |
| Local CA            | ca_create: true                 | Unified certificate trust chain               |
| Backup & recovery   | pgbackrest_enabled: true        | Prevent data loss from mistakes               |
| Nginx HTTPS         | nginx_sslmode: enable           | Prevent plaintext web ingress                 |
| MinIO HTTPS         | minio_https: true               | Prevent backup traffic snooping               |
| OS baseline         | SELinux permissive              | Baseline for enforcing mode                   |

Defaults prioritize usability and scalability. Production should be hardened to meet compliance needs.


Hardening Roadmap

Pigsty provides the security hardening template conf/ha/safe.yml, which upgrades the baseline to a higher security level:

  • Enforce SSL and certificate auth
  • Password strength and expiration policies
  • Connection and disconnection logs
  • Firewall and SELinux hardening

This Chapter

| Section                 | Description                                      | Core Question                                  |
|-------------------------|--------------------------------------------------|------------------------------------------------|
| Defense in Depth        | Seven-layer security model and baseline          | How does the security system land end to end?  |
| Authentication          | HBA rules, password policy, certificate auth     | How do we verify identities?                   |
| Access Control          | Role system, permission model, database isolation | How do we control privileges?                 |
| Encrypted Communication | TLS, local CA, certificate management            | How do we protect transport and certs?         |
| Data Security           | Checksums, backup, encryption, recovery          | How do we keep data intact and recoverable?    |
| Compliance Checklist    | MLPS Level 3 and SOC2 mapping                    | How do we meet compliance requirements?        |

3.7.1 - Seven-Layer Security Model

Pigsty defense-in-depth model with layered security baselines from physical to user.

Security is not a wall, but a city. Pigsty adopts a defense-in-depth strategy and builds multiple protections across seven layers. Even if one layer is breached, other layers still protect the system.

This layered approach addresses three core risks:

  • Perimeter breach: reduce the chance that one breach compromises everything.
  • Internal abuse: even if an internal account is compromised, least privilege limits damage.
  • Unpredictable failures: hardware, software, and human errors all get multi-layer fallbacks.

Overview


L1 Physical and Media Security

When the physical layer falls, the only defense is the data itself.

Problems solved

  • Silent data corruption from hardware faults
  • Data leakage from stolen media

Pigsty support

  • Data checksums: default pg_checksum: true, detects corruption from bad blocks/memory errors.
  • Optional transparent encryption: pg_tde and similar extensions encrypt data at rest.

L2 Network Security

Control who can reach services to reduce attack surface.

Problems solved

  • Unauthorized network access
  • Plaintext traffic sniffing/tampering

Pigsty support

  • Firewall zones: node_firewall_mode can enable zone, trust intranet, restrict public.
  • Listen hardening: pg_listen limits bind addresses to avoid full exposure.
  • TLS: HBA supports ssl/cert for encryption and identity checks.

L3 Perimeter Security

A unified ingress is the basis for audit, control, and blocking.

Problems solved

  • Multiple entry points are hard to manage
  • External systems lack a unified hardening point

Pigsty support

  • HAProxy ingress: unified DB traffic entry for blocking/limiting/failover.
  • Nginx gateway: unified HTTPS ingress for infrastructure services (nginx_sslmode).
  • Centralized credentials: HAProxy and Grafana admin passwords are declared in config.

L4 Host Security

The foundation of DB security: least privilege, isolation, and hardening.

Problems solved

  • Host compromise leads to total loss
  • Admin privileges spread too widely

Pigsty support

  • SELinux mode: node_selinux_mode can switch to enforcing.
  • Least-privilege admin: node_admin_sudo supports limit to restrict sudo commands.
  • Sensitive file permissions: CA private key directory 0700, private key files 0600.

L5 Application Security

Authentication is the first gate for DB security.

Problems solved

  • Weak passwords or plaintext auth leak accounts
  • Misconfigured rules allow privilege escalation

Pigsty support

  • HBA layering: rules by source and role; internet admin must use ssl.
  • SCRAM password hash: pg_pwd_enc: scram-sha-256 by default.
  • Password strength checks: passwordcheck/credcheck optional.
  • Certificate auth: auth: cert supports client certs.

L6 Data Security

Ensure data is available, recoverable, and accountable.

Problems solved

  • Human errors and logic mistakes
  • Data tampering or deletion after intrusion

Pigsty support

  • pgBackRest backup: enabled by default, local and MinIO repos.
  • Backup encryption: MinIO supports AES-256-CBC (cipher_type).
  • PITR recovery: restore to any time point with WAL archive.
  • Audit logs: templates enable DDL/connection/slow query logs, optional pgaudit.

L7 User Security

Least privilege is not a slogan, it is default behavior.

Problems solved

  • Business accounts have excessive permissions
  • Databases can “pierce” each other

Pigsty support

  • Four-tier RBAC: dbrole_readonly/readwrite/admin/offline.
  • Default privileges: objects automatically get correct grants.
  • Database isolation: revokeconn: true blocks cross-DB access.
  • Public privilege tightening: revoke CREATE on public schema.

Security Hardening Path

Pigsty provides a security hardening template: conf/ha/safe.yml. It bundles common hardening items:

  • Enforce SSL and certificate auth
  • Password strength and expiration policies
  • Connection and disconnection logs
  • Firewall and SELinux hardening

This path is a quick upgrade from default to compliance-ready.



3.7.2 - Authentication

HBA rules, password policy, and certificate auth - who can connect and how to prove identity.

Authentication answers three core questions:

  • Who you are: is the identity unique and recognizable?
  • How you prove it: are passwords/certs strong enough?
  • Where you come from: is the source controlled?

Pigsty uses HBA rules + password/certificates for authentication, with SCRAM as the default password hash.


Authentication Flow

flowchart LR
  C[Client] --> HBA[HBA Rules]
  HBA --> A1[Password SCRAM]
  HBA --> A2[Certificate Auth]
  HBA --> A3[Local ident/peer]
  A1 --> RBAC[Roles and Privileges]
  A2 --> RBAC
  A3 --> RBAC

HBA decides “who can come from where”, and the auth method decides “how identity is proven”.


HBA Layering Model

Pigsty default HBA rules are layered:

  • Local uses ident/peer, the safest.
  • Intranet uses scram password auth.
  • Internet admin must use ssl.

This solves “same user, different auth strength by source”.

Key capabilities of HBA rules

  • Order first: supports order sorting, smaller number means higher priority.
  • Address aliases: local / localhost / intra / world, etc.
  • Role conditions: primary/replica/offline for fine-grained control.
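
Putting these fields together, a hedged pg_hba_rules sketch (addresses and order values are illustrative; the auth aliases follow the examples in this chapter):

```yaml
pg_hba_rules:
  - { user: '+dbrole_readonly', db: all, addr: intra, auth: ssl,  order: 200 }  # intranet via TLS
  - { user: dbuser_dba,         db: all, addr: world, auth: cert, order: 300 }  # internet admin via client cert
```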

Password Authentication

Default password hash:

pg_pwd_enc: scram-sha-256

Problems solved

  • Plaintext password storage risk
  • Weak hashes cracked offline

Compatibility

For legacy clients you can fall back to md5, at the cost of weaker security.


Password Strength and Rotation

Pigsty can enable password strength checking extensions:

pg_libs: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
pg_extensions: [ passwordcheck, credcheck ]

Use expire_in to control account expiry:

pg_users:
  - { name: dbuser_app, password: 'StrongPwd', expire_in: 365 }

Problems solved

  • Weak or reused passwords
  • Long-lived accounts without rotation

Certificate Authentication

Certificates mitigate the risk of “password phishing or copying”.

  • HBA auth: cert requires client certs.
  • Certificate CN usually matches the database username.
  • Pigsty ships cert.yml to issue client certificates.

PgBouncer Authentication

PgBouncer uses separate HBA rules and TLS settings:

pgbouncer_sslmode: disable   # default off, set to require/verify-full
pgb_default_hba_rules: [...] # separate rules

This solves the problem of “pool entry and database entry being out of sync”.


Default Accounts and Risks

| User           | Default Password | Risk                                    |
|----------------|------------------|-----------------------------------------|
| dbuser_dba     | DBUser.DBA       | admin account default password          |
| dbuser_monitor | DBUser.Monitor   | monitor account can be abused           |
| replicator     | DBUser.Replicator | replication account abuse can leak data |

Default passwords must be changed in production.
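
One hedged way to replace them is to declare new credentials in pigsty.yml before deployment (the parameter names below are assumptions based on Pigsty's password conventions; verify them against your version's parameter reference):

```yaml
pg_admin_password:       YourStrong.AdminPass    # replaces DBUser.DBA
pg_monitor_password:     YourStrong.MonitorPass  # replaces DBUser.Monitor
pg_replication_password: YourStrong.ReplPass     # replaces DBUser.Replicator
```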


Security Recommendations

  • Use ssl/cert on all public entry points.
  • Use scram for intranet users, avoid md5.
  • Enable passwordcheck to enforce complexity.
  • Rotate passwords regularly (expire_in).


3.7.3 - Access Control

Pigsty provides an out-of-the-box role and privilege model that enforces least privilege.

Access control answers two core questions:

  • What you can do: boundaries for read/write/DDL
  • What data you can access: isolation across databases and schemas

Pigsty enforces least privilege with RBAC roles + default privileges.


Four-Tier Role Model

flowchart TB
    subgraph Admin["dbrole_admin (Admin)"]
        A1["Can run DDL / CREATE / ALTER"]
        A2["Inherits dbrole_readwrite"]
    end
    subgraph RW["dbrole_readwrite (Read-Write)"]
        RW1["Can INSERT/UPDATE/DELETE"]
        RW2["Inherits dbrole_readonly"]
    end
    subgraph RO["dbrole_readonly (Read-Only)"]
        RO1["Can SELECT all tables"]
    end
    subgraph Offline["dbrole_offline (Offline)"]
        OFF1["Only for offline instances"]
    end

    Admin --> RW --> RO

Problems solved

  • Production accounts have excessive permissions
  • DDL and DML are not separated, increasing risk

Default Roles and System Users

Pigsty provides four roles and four system users (from default source values):

| Role/User        | Attributes  | Inherits/Roles                 | Description                            |
|------------------|-------------|--------------------------------|----------------------------------------|
| dbrole_readonly  | NOLOGIN     | -                              | global read-only access                |
| dbrole_offline   | NOLOGIN     | -                              | restricted read-only (offline instances) |
| dbrole_readwrite | NOLOGIN     | dbrole_readonly                | global read-write access               |
| dbrole_admin     | NOLOGIN     | pg_monitor, dbrole_readwrite   | admin / object creation                |
| postgres         | SUPERUSER   | -                              | system superuser                       |
| replicator       | REPLICATION | pg_monitor, dbrole_readonly    | replication user                       |
| dbuser_dba       | SUPERUSER   | dbrole_admin                   | admin user                             |
| dbuser_monitor   | -           | pg_monitor, dbrole_readonly    | monitor user                           |

This default role set covers most use cases.


Default Privilege Policy

Pigsty writes default privileges (pg_default_privileges) during initialization so new objects automatically get reasonable permissions.

Problems solved

  • New objects lack grants and apps fail
  • Accidental grants to PUBLIC expose the whole DB

Approach

  • Read-only role: SELECT/EXECUTE
  • Read-write role: INSERT/UPDATE/DELETE
  • Admin role: DDL privileges

Object Ownership and DDL Convention

Default privileges only apply to objects created by admin roles.

That means:

  • Run DDL as dbuser_dba / postgres
  • Or business admins SET ROLE dbrole_admin before DDL

Otherwise, new objects fall outside the default privilege system and break least privilege.


Database Isolation

Database-level isolation uses revokeconn:

pg_databases:
  - { name: appdb, owner: dbuser_app, revokeconn: true }

Problems solved

  • One account can “pierce” all databases
  • Multi-tenant DBs lack boundaries

Public Privilege Tightening

Pigsty revokes CREATE on the public schema during init:

REVOKE CREATE ON SCHEMA public FROM PUBLIC;

Problems solved

  • Unauthorized users create objects
  • “Shadow tables/functions” security risks

Offline Role Usage

dbrole_offline can only access offline instances (pg_role=offline or pg_offline_query=true).

Problems solved

  • ETL/analysis affects production performance
  • Personal accounts run risky queries on primary
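
A hedged inventory sketch with a dedicated offline instance (IPs and cluster name are illustrative):

```yaml
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: offline }  # dbrole_offline users are routed here
  vars: { pg_cluster: pg-test }
```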

Best Practices

  • Use dbrole_readwrite or dbrole_readonly for business accounts.
  • Run production DDL via admin roles.
  • Enable revokeconn for multi-tenant isolation.
  • Use dbrole_offline for reporting/ETL.


3.7.4 - Encrypted Communication and Local CA

Pigsty includes a self-signed CA to issue TLS certificates and encrypt network traffic.

Encrypted communication solves three problems:

  • Eavesdropping: prevent plaintext traffic sniffing
  • Tampering: prevent MITM modification
  • Impersonation: prevent fake servers/clients

Pigsty uses a local CA + TLS to provide a unified trust root for databases and infrastructure components.


Role of the Local CA

Pigsty generates a self-signed CA on the admin node by default:

files/pki/ca/ca.key   # CA private key (must be protected)
files/pki/ca/ca.crt   # CA root certificate (distributable)

Default values in source:

  • ca_create: true: auto-generate if CA not found.
  • ca_cn: pigsty-ca: CA certificate CN fixed to pigsty-ca.
  • Root cert validity about 100 years (self-signed).
  • Server/client cert validity cert_validity: 7300d (20 years).

Certificate Coverage

The local CA issues certs for multiple components with a unified trust chain:

| Component              | Purpose               | Typical Path           |
|------------------------|-----------------------|------------------------|
| PostgreSQL / PgBouncer | connection encryption | /pg/cert/              |
| Patroni                | API communication     | /pg/cert/              |
| etcd                   | DCS encryption        | /etc/etcd/             |
| MinIO                  | object storage HTTPS  | ~minio/.minio/certs/   |
| Nginx                  | web ingress HTTPS     | /etc/nginx/conf.d/cert/ |

Problem solved: different components issuing their own certs create fragmented trust; a unified CA enables one distribution, many uses.


Trust Distribution

Pigsty distributes ca.crt to all nodes and adds it to system trust:

  • Cert path: /etc/pki/ca.crt
  • EL family: /etc/pki/ca-trust/source/anchors/
  • Debian/Ubuntu: /usr/local/share/ca-certificates/

This allows system clients to trust Pigsty-issued certificates automatically.


Using an External CA

If you already have an enterprise CA, replace:

files/pki/ca/ca.key
files/pki/ca/ca.crt

Recommended:

ca_create: false

Problem solved: prevents accidental generation of a new self-signed CA and trust chain confusion.


Client Certificate Authentication

Certificate auth can replace or enhance password auth:

  • Avoid password phishing or leakage
  • Certificates can bind device and account

Pigsty ships cert.yml to issue client certificates:

./cert.yml -e cn=dbuser_dba
./cert.yml -e cn=dbuser_monitor

Generated by default at:

files/pki/misc/<cn>.key
files/pki/misc/<cn>.crt

Key Protection and Rotation

  • CA private key is 0600 by default and stored in a 0700 directory.
  • If the CA private key leaks, regenerate the CA and re-issue all certs.
  • Rotate certificates after major upgrades or key incidents.


3.7.5 - Data Security

Data integrity, backup and recovery, encryption and audit.

Data security focuses on three things: integrity, recoverability, confidentiality. Pigsty enables key capabilities by default and supports further hardening.


Data Integrity

Problems solved

  • Silent corruption from bad disks or memory errors
  • Accidental writes causing data pollution

Pigsty support

  • Data checksums: default pg_checksum: true, enables data-checksums at init.
  • Replica fallback: recover bad blocks from replicas (with HA).

Recoverability (Backup and PITR)

Problems solved

  • Accidental deletion or modification
  • Disaster-level data loss

Pigsty support

  • pgBackRest enabled by default: pgbackrest_enabled: true.
  • Local repository: keeps 2 full backups by default.
  • Remote repository: MinIO support, object storage and multi-replica.
  • PITR: recover to any point in time with WAL archive.

Data Confidentiality

Problems solved

  • Backup theft leading to data leakage
  • Media theft leaking plaintext data

Pigsty support

  • Backup encryption: MinIO repo supports AES-256-CBC (cipher_type).
  • Transparent encryption (optional): pg_tde and similar extensions for at-rest encryption.
  • Key isolation: keep cipher_pass separate from CA private keys.

Audit and Traceability

Problems solved

  • No accountability or audit trail
  • Compliance audits lack evidence

Pigsty support

  • Log collection: templates enable logging_collector by default.
  • DDL audit: log_statement: ddl.
  • Slow queries: log_min_duration_statement.
  • Connection logs: log_connections (PG18+).
  • Audit extensions: pgaudit, pgauditlogtofile optional.

Hardening Recommendations

  • Enforce encryption and dedicated keys for remote backups.
  • Drill PITR regularly and verify the recovery chain.
  • Enable pgaudit for critical workloads.
  • Pair with High Availability for “backup + replica” double safety.


3.7.6 - Compliance Checklist

Map Pigsty security capabilities and evidence preparation using SOC2 and MLPS Level 3.

Compliance is not a switch, but a combination of configuration + process + evidence:

  • Configuration: are security capabilities enabled (HBA/TLS/audit/backup)?
  • Process: access management, change control, backup drills
  • Evidence: logs, config snapshots, backup reports, monitoring alerts

This page uses SOC2 and MLPS Level 3 as entry points to map Pigsty’s security capabilities and compliance evidence.


Default Credentials Checklist (Must Change)

From source defaults:

| Component              | Default Username | Default Password  |
|------------------------|------------------|-------------------|
| PostgreSQL Admin       | dbuser_dba       | DBUser.DBA        |
| PostgreSQL Monitor     | dbuser_monitor   | DBUser.Monitor    |
| PostgreSQL Replication | replicator       | DBUser.Replicator |
| Patroni API            | postgres         | Patroni.API       |
| HAProxy Admin          | admin            | pigsty            |
| Grafana Admin          | admin            | pigsty            |
| MinIO Root             | minioadmin       | S3User.MinIO      |
| etcd Root              | root             | Etcd.Root         |

Must change all defaults in production.


Evidence Preparation

| Evidence Type          | Description                     | Pigsty Support                            |
|------------------------|---------------------------------|-------------------------------------------|
| Config snapshots       | HBA, roles, TLS, backup policy  | pigsty.yml / inventory config             |
| Access control         | roles and privileges            | pg_default_roles / pg_default_privileges  |
| Connection audit       | connect/disconnect/DDL          | log_connections / log_statement           |
| Backup reports         | full backup and restore records | pgBackRest logs and jobs                  |
| Monitoring alerts      | abnormal events                 | Prometheus + Grafana                      |
| Certificate management | CA/cert distribution records    | files/pki/ / /etc/pki/ca.crt              |

SOC2 Perspective (Example Mapping)

SOC2 focuses on security, availability, confidentiality. Below is a conceptual mapping of common controls:

| Control (SOC2)             | Problem               | Pigsty Capability                        | Process Needed                     |
|----------------------------|-----------------------|------------------------------------------|------------------------------------|
| CC6 Logical access control | Unauthorized access   | HBA + RBAC + default privileges          | Access approval and periodic audit |
| CC6 Auth strength          | Weak/reused passwords | SCRAM + passwordcheck                    | Password rotation policy           |
| CC6 Transport encryption   | Plaintext transport   | TLS/CA, ssl/cert                         | Enforced TLS policy                |
| CC7 Monitoring             | Incidents unnoticed   | Prometheus/Grafana                       | Alert handling process             |
| CC7 Audit trail            | No accountability     | connection/DDL/slow query logs, pgaudit  | Log retention and review           |
| CC9 Business continuity    | Data not recoverable  | pgBackRest + PITR                        | Regular recovery drills            |

This is a conceptual mapping. SOC2 requires organizational policies and audit evidence.


MLPS Level 3 (GB/T 22239-2019) Mapping

MLPS Level 3 focuses on identity, access control, audit, data security, communication security, host security, and network boundary. Below is a mapping of key controls:

| Control | Problem | Pigsty Capability | Config/Process Needed |
|---|---|---|---|
| Identity uniqueness | Shared accounts | Unique users + SCRAM | Account management process |
| Password complexity | Weak passwords | passwordcheck/credcheck | Enable extensions |
| Password rotation | Long-term risk | expire_in | Rotation policy |
| Access control | Privilege abuse | RBAC + default privileges | Access approvals |
| Least privilege | Privilege sprawl | Four-tier role model | Account tiering |
| Transport confidentiality | Plaintext leakage | TLS/CA, HBA ssl/cert | Enforce TLS |
| Security audit | No accountability | connection/DDL/slow query logs + pgaudit | Log retention |
| Data integrity | Silent corruption | pg_checksum: true | - |
| Backup and recovery | Data loss | pgBackRest + PITR | Drills and acceptance |
| Host security | Host compromise | SELinux/firewall | Hardening policy |
| Boundary security | Exposed entry | HAProxy/Nginx unified ingress | Network segmentation |
| Security management system | Lack of process | - | Policies and approvals |

Tip: MLPS Level 3 is not only technical; it requires strong operations processes.


Compliance Hardening Snippets

# Enforce SSL / certificates
pg_hba_rules:
  - { user: '+dbrole_readonly', db: all, addr: intra, auth: ssl }
  - { user: dbuser_dba, db: all, addr: world, auth: cert }

# Password strength
pg_libs: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
pg_extensions: [ passwordcheck, credcheck ]

# PgBouncer / Patroni TLS
pgbouncer_sslmode: require
patroni_ssl_enabled: true

# OS security
node_firewall_mode: zone
node_selinux_mode: enforcing

Compliance Checklist

Before Deployment

  • Network segmentation and trusted CIDRs defined
  • Certificate policy decided (self-signed / enterprise CA)
  • Account and privilege tiering plan confirmed

After Deployment (Must)

  • Change all default passwords
  • Verify HBA rules meet expectations
  • Enable and verify TLS
  • Configure audit and log retention policies

Periodic Maintenance

  • Permission audit and account cleanup
  • Certificate rotation
  • Backup and recovery drills

Next

4 - Get Started

Deploy Pigsty single-node version on your laptop/cloud server, access DB and Web UI

Pigsty uses a scalable architecture design, suitable for both large-scale production environments and single-node development/demo environments. This guide focuses on the latter.

If you intend to learn about Pigsty, you can start with the Quick Start single-node deployment. A Linux virtual machine with 1C/2G is sufficient to run Pigsty.

You can use a Linux MiniPC, free/discounted virtual machines provided by cloud providers, Windows WSL, or create a virtual machine on your own laptop for Pigsty deployment. Pigsty provides out-of-the-box Vagrant templates and Terraform templates to help you provision Linux VMs with one click locally or in the cloud.

pigsty-arch

The single-node version of Pigsty includes all core features: 440+ PG extensions, self-contained Grafana/Victoria monitoring, IaC provisioning capabilities, and local PITR point-in-time recovery. If you have external object storage (for PostgreSQL PITR backup), then for scenarios like demos, personal websites, and small services, even a single-node environment can provide a certain degree of data persistence guarantee. However, single-node cannot achieve High Availability—automatic failover requires at least 3 nodes.

If you want to install Pigsty in an environment without internet connection, please refer to the Offline Install mode. If you only need the PostgreSQL database itself, please refer to the Slim Install mode. If you are ready to start serious multi-node production deployment, please refer to the Deployment Guide.


Quick Start

Prepare a node with compatible Linux system, and execute as an admin user with passwordless ssh and sudo privileges:

curl -fsSL https://repo.pigsty.io/get | bash  # Install Pigsty and dependencies
cd ~/pigsty; ./configure -g                   # Generate config (with 1-node template, -g generates random passwords)
./deploy.yml                                  # Execute deployment playbook

Yes, it’s that simple. You can use pre-configured templates to bring up Pigsty with one click without understanding any details.

Next, you can explore the Graphical User Interface, access PostgreSQL database services; or perform configuration customization and execute playbooks to deploy more clusters.

4.1 - Single-Node Installation

Get started with Pigsty—complete single-node install on a fresh Linux host!

This is the Pigsty single-node install guide. For multi-node HA prod deployment, refer to the Deployment docs.

Pigsty single-node installation consists of three steps: Install, Configure, and Deploy.


Summary

Prepare a node with compatible OS, and run as an admin user with nopass ssh and sudo:

curl -fsSL https://repo.pigsty.io/get | bash;  # default mirror (pigsty.io)
curl -fsSL https://repo.pigsty.cc/get | bash;  # alternative mirror (pigsty.cc, e.g., for China)

This command runs the install script, downloads and extracts Pigsty source to your home directory and installs dependencies. Then complete Configure and Deploy:

cd ~/pigsty      # Enter Pigsty directory
./configure -g   # Generate config file (optional, skip if you know how to configure)
./deploy.yml     # Execute deployment playbook based on generated config

After installation, access the Web UI via IP/domain + port 80/443 through Nginx, and access the default PostgreSQL service via port 5432.

The complete process takes 3–10 minutes depending on server specs/network. Offline installation speeds this up significantly; for monitoring-free setups, use Slim Install for even faster deployment.

Video Example: Online Single-Node Installation (Debian 13, x86_64)


Prepare

Installing Pigsty involves some preparation work. Here’s a checklist.

For single-node installations, many constraints can be relaxed—typically you only need to know your IP address. If you don’t have a static IP, use 127.0.0.1.

| Item | Requirement | Item | Requirement |
|---|---|---|---|
| Node | 1-node, at least 1C2G, no upper limit | Disk | /data mount point, xfs recommended |
| OS | Linux x86_64 / aarch64, EL/Debian/Ubuntu | Network | Static IPv4; single-node without fixed IP can use 127.0.0.1 |
| SSH | nopass SSH login via public key | SUDO | sudo privilege, preferably with nopass option |



Install

Use the following commands to auto-install Pigsty source to ~/pigsty (recommended). Deployment dependencies (Ansible) are installed automatically.

curl -fsSL https://repo.pigsty.io/get | bash            # Install latest stable version
curl -fsSL https://repo.pigsty.io/get | bash -s v4.0.0  # Install specific version
curl -fsSL https://repo.pigsty.cc/get | bash            # Latest stable (pigsty.cc mirror)
curl -fsSL https://repo.pigsty.cc/get | bash -s v4.0.0  # Specific version (pigsty.cc mirror)

If you prefer not to run a remote script, you can manually download or clone the source. When using git, always checkout a specific version before use.

git clone https://github.com/pgsty/pigsty; cd pigsty;
git checkout v4.0.0;  # Always checkout a specific version when using git

For manual download/clone installations, run the bootstrap script to install Ansible and other dependencies. You can also install them yourself.

./bootstrap           # Install ansible for subsequent deployment

Configure

In Pigsty, deployment blueprints are defined by the inventory, the pigsty.yml configuration file. You can customize through declarative configuration.

Pigsty provides the configure script as an optional configuration wizard, which generates an inventory with good defaults based on your environment and input:

./configure -g                # Use config wizard to generate config with random passwords

The generated config file is at ~/pigsty/pigsty.yml by default. Review and customize as needed before installation.

Many configuration templates are available for reference. You can skip the wizard and directly edit pigsty.yml:

./configure                  # Default template, install PG 18 with essential extensions
./configure -v 17            # Use PG 17 instead of default PG 18
./configure -c rich          # Create local repo, download all extensions, install major ones
./configure -c slim          # Minimal install template, use with ./slim.yml playbook
./configure -c app/supa      # Use app/supa self-hosted Supabase template
./configure -c ivory         # Use IvorySQL kernel instead of native PG
./configure -i 10.11.12.13   # Explicitly specify primary IP address
./configure -r china         # Use China mirrors instead of default repos
./configure -c ha/full -s    # Use 4-node sandbox template, skip IP replacement/detection
Example configure output
$ ./configure

configure pigsty v4.0.0 begin
[ OK ] region  = default
[ OK ] kernel  = Linux
[ OK ] machine = x86_64
[ OK ] package = rpm,dnf
[ OK ] vendor  = rocky (Rocky Linux)
[ OK ] version = 9 (9.6)
[ OK ] sudo = vagrant ok
[ OK ] ssh = [email protected] ok
[WARN] Multiple IP address candidates found:
    (1) 192.168.121.24	inet 192.168.121.24/24 brd 192.168.121.255 scope global dynamic noprefixroute eth0
    (2) 10.10.10.12	    inet 10.10.10.12/24 brd 10.10.10.255 scope global noprefixroute eth1
[ IN ] INPUT primary_ip address (of current meta node, e.g 10.10.10.10):
=> 10.10.10.12    # <------- INPUT YOUR PRIMARY IPV4 ADDRESS HERE!
[ OK ] primary_ip = 10.10.10.12 (from input)
[ OK ] admin = [email protected] ok
[ OK ] mode = meta (el9)
[ OK ] locale  = C.UTF-8
[ OK ] configure pigsty done
proceed with ./deploy.yml

Common configure arguments:

| Argument | Description |
|---|---|
| `-i, --ip` | Primary internal IP of current host, replaces placeholder 10.10.10.10 |
| `-c, --conf` | Config template name relative to conf/, without .yml suffix |
| `-v, --version` | PostgreSQL major version: 13, 14, 15, 16, 17, 18 |
| `-r, --region` | Upstream repo region for faster downloads: default / china / europe |
| `-n, --non-interactive` | Use command-line args for primary IP, skip interactive wizard |
| `-x, --proxy` | Use current env vars to configure proxy_env |

If your machine has multiple IPs bound, use -i|--ip <ipaddr> to explicitly specify the primary IP, or provide it in the interactive prompt. The script replaces the placeholder 10.10.10.10 with your node’s primary IPv4 address. Choose a static IP; do not use public IPs.


Deploy

Pigsty’s deploy.yml playbook applies the blueprint from Configure to target nodes.

./deploy.yml     # Deploy all defined modules on current node at once
Example deployment output
......

TASK [pgsql : pgsql init done] *************************************************
ok: [10.10.10.11] => {
    "msg": "postgres://10.10.10.11/postgres | meta  | dbuser_meta dbuser_view "
}
......

TASK [pg_monitor : load grafana datasource meta] *******************************
changed: [10.10.10.11]

PLAY RECAP *********************************************************************
10.10.10.11                : ok=302  changed=232  unreachable=0    failed=0    skipped=65   rescued=0    ignored=1
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

When you see pgsql init done, PLAY RECAP and similar output at the end, installation is complete!



Interface

After single-node installation, you typically have four modules installed on the current node: PGSQL, INFRA, NODE, and ETCD.

| ID | NODE | PGSQL | INFRA | ETCD |
|---|---|---|---|---|
| 1 | 10.10.10.10 | pg-meta-1 | infra-1 | etcd-1 |

The INFRA module provides a graphical management interface, accessible via Nginx on ports 80/443.

The PGSQL module provides a PostgreSQL database server, listening on 5432, also accessible via Pgbouncer/HAProxy proxies.


More

Use the current node as a base to deploy and monitor more clusters: add cluster definitions to the inventory and run:

bin/node-add   pg-test      # Add the 3 nodes of cluster pg-test to Pigsty management
bin/pgsql-add  pg-test      # Initialize a 3-node pg-test HA PG cluster
bin/redis-add  redis-ms     # Initialize Redis cluster: redis-ms

Most modules require the NODE module installed first. See available modules for details:

PGSQL, INFRA, NODE, ETCD, MINIO, REDIS, FERRET, DOCKER……

4.2 - Docker Deployment

Spin up Pigsty in Docker containers for quick testing on macOS/Windows

Pigsty is designed for native Linux, but can also run in Linux containers with systemd. If you don’t have native Linux (e.g., macOS or Windows), use Docker to spin up a local single-node Pigsty for testing.


Quick Start

Enter the docker/ dir in Pigsty source and launch with one command:

cd ~/pigsty/docker
make launch          # Start container + generate config + deploy

After deployment, access services:

| Service | URL / Command | Credentials |
|---|---|---|
| SSH | ssh root@localhost -p 2222 | Password: pigsty |
| Web Portal | http://localhost:8080 | - |
| Grafana | http://localhost:8080/ui | admin / pigsty |
| PostgreSQL | psql postgres://dbuser_dba:DBUser.DBA@localhost:5432/postgres | DBUser.DBA |

Prepare

Docker deployment requires:

| Item | Requirement | Item | Requirement |
|---|---|---|---|
| Docker | Docker 20.10+ (Desktop or CE) | CPU | At least 1 core |
| RAM | At least 2GB | Disk | At least 20GB free |

Ensure default host ports (2222/8080/8443/5432) are available, or edit .env first.


Image

Pigsty provides an out-of-the-box Docker image on Docker Hub.

| Image | Pull | Size | Contents |
|---|---|---|---|
| pgsty/pigsty | ~500MB | 1.3GB | Debian 13 + systemd + SSH + pig + Ansible |
  • Supports both amd64 (x86_64) and arm64 (Apple Silicon, AWS Graviton)
  • Tags match Pigsty versions: v4.0.0, latest, etc.
  • Pre-configured with docker template, ready to run ./deploy.yml

Built on Debian 13 (Trixie), pre-installed with pig CLI and Ansible, Pigsty source already initialized.


Launch

Pigsty provides out-of-the-box Docker support in the docker/ source directory.

Simplest way is make launch, which auto-completes: start container, generate config, and deploy:

cd ~/pigsty/docker
make launch          # One-liner: up + config + deploy

Or step by step for inspection at each stage:

cd ~/pigsty/docker
make up              # Start container
make exec            # Enter container
./configure -c docker -g --ip 127.0.0.1  # Generate config (optional, pre-configured)
./deploy.yml         # Execute deployment

To build locally instead of pulling from Docker Hub:

cd ~/pigsty/docker
make build           # Build image locally
make launch          # Start container + generate config + deploy

Config

Customize image version and port mappings via .env:

PIGSTY_VERSION=v4.0.0         # Image tag, matches Pigsty version
PIGSTY_SSH_PORT=2222          # SSH port
PIGSTY_HTTP_PORT=8080         # Nginx HTTP port
PIGSTY_HTTPS_PORT=8443        # Nginx HTTPS port
PIGSTY_PG_PORT=5432           # PostgreSQL port

Port Mapping:

| Env Var | Default | Container | Description |
|---|---|---|---|
| PIGSTY_VERSION | v4.0.0 | - | Image version tag |
| PIGSTY_SSH_PORT | 2222 | 22 | SSH access port |
| PIGSTY_HTTP_PORT | 8080 | 80 | Nginx HTTP port |
| PIGSTY_HTTPS_PORT | 8443 | 443 | Nginx HTTPS port |
| PIGSTY_PG_PORT | 5432 | 5432 | PostgreSQL port |

Override via env vars if defaults are occupied:

PIGSTY_HTTP_PORT=8888 docker compose up -d

Commands

Pigsty Docker provides Makefile commands for container and image management.

Docker Compose

Recommended way to run:

make up           # Start container
make down         # Stop and remove container
make start        # Start stopped container
make stop         # Stop container
make restart      # Restart container
make pull         # Pull latest image
make config       # Run ./configure in container
make deploy       # Run ./deploy.yml in container
make launch       # One-liner: up + config + deploy

Container Access

make exec         # Enter container bash
make ssh          # SSH into container
make log          # View container logs
make status       # View systemd status
make ps           # View process list
make conf         # View config file
make pass         # View passwords in config

Image Build

make build        # Build image locally
make buildnc      # Build without cache
make push         # Build and push multi-arch image

Image Management

make save         # Export image to pigsty-<version>-<arch>.tgz
make load         # Import image from tgz file
make rmi          # Remove current version's pigsty image

Cleanup

make clean        # Stop and remove container
make purge        # Remove container and wipe data (prompts)

Manual Run

If you prefer docker run over Docker Compose:

mkdir -p ./data
docker run -d --privileged --name pigsty \
  -p 2222:22 -p 8080:80 -p 5432:5432 \
  -v ./data:/data \
  pgsty/pigsty:v4.0.0

docker exec -it pigsty ./configure -c docker -g --ip 127.0.0.1
docker exec -it pigsty ./deploy.yml

Or use Makefile’s make run:

make run          # Start with docker run
make exec         # Enter container
make clean        # Stop and remove container
make purge        # Remove container and wipe data

How It Works

Pigsty Docker image is based on Debian 13 (Trixie) with systemd as init. Service management inside container stays consistent with native Linux via systemctl.

Key features:

  • systemd support: Full systemd for proper service management
  • SSH access: Pre-configured SSH, root password is pigsty
  • Privileged mode: Requires --privileged for systemd
  • Data persistence: Via /data volume mount
  • Pre-installed: pig CLI + Ansible, Pigsty source initialized

Image build executes these init steps:

# Install pig CLI
RUN echo "deb [trusted=yes] https://repo.pigsty.io/apt/infra/ generic main" \
    > /etc/apt/sources.list.d/pigsty.list \
    && apt-get update && apt-get install -y pig

# Initialize Pigsty source and install Ansible
RUN pig sty init -v ${PIGSTY_VERSION} \
    && pig sty boot \
    && pig sty conf -c docker --ip 127.0.0.1

Running ./configure with -c docker applies the Docker-optimized config template:

  • Uses 127.0.0.1 as default IP
  • Tuned for container environment

FAQ

Container won’t start

Ensure Docker is properly installed with sufficient resources. On Docker Desktop, allocate at least 2GB RAM. Check for port conflicts on 2222, 8080, 8443, 5432.

Can’t access services

Web Portal and PostgreSQL only available after deployment. Ensure ./deploy.yml finished successfully. Use make status to check service status.

Port conflicts

Override via .env or env vars:

PIGSTY_HTTP_PORT=8888 PIGSTY_PG_PORT=5433 docker compose up -d

Data persistence

Container data mounted to ./data. To wipe and start fresh:

make purge        # Remove container and wipe data (prompts)

macOS performance

On macOS with Docker Desktop, performance is worse than native Linux due to virtualization overhead. Expected—Docker deployment is for dev/testing. For production, use native Linux installation.


More

4.3 - Web Interface

Explore Pigsty’s Web graphical management interface, Grafana dashboards, and how to access them via domain names and HTTPS.

After single-node installation, you’ll have the INFRA module installed on the current node, which includes an out-of-the-box Nginx web server.

The default server configuration provides a WebUI graphical interface for displaying monitoring dashboards and unified proxy access to other component web interfaces.


Access

You can access this graphical interface by entering the deployment node’s IP address in your browser. By default, Nginx serves on standard ports 80/443.

| Direct IP Access | Domain (HTTP) | Domain (HTTPS) | Demo |
|---|---|---|---|
| http://10.10.10.10 | http://i.pigsty | https://i.pigsty | https://demo.pigsty.io |


Monitoring

To access Pigsty’s monitoring system dashboards (Grafana), visit the /ui endpoint on the server.

| Direct IP Access | Domain (HTTP) | Domain (HTTPS) | Demo |
|---|---|---|---|
| http://10.10.10.10/ui | http://i.pigsty/ui | https://i.pigsty/ui | https://demo.pigsty.io/ui |

If your service is exposed to the Internet or an office network, we recommend accessing it via domain names and enabling HTTPS encryption; only minimal configuration is needed.


Endpoints

By default, Nginx exposes the following endpoints via different paths on the default server at ports 80/443:

| Endpoint | Component | Native Port | Description | Public Demo |
|---|---|---|---|---|
| / | Nginx | 80/443 | Homepage, local repo, file service | demo.pigsty.io |
| /ui/ | Grafana | 3000 | Grafana dashboard portal | demo.pigsty.io/ui/ |
| /vmetrics/ | VictoriaMetrics | 8428 | Time series database Web UI | demo.pigsty.io/vmetrics/ |
| /vlogs/ | VictoriaLogs | 9428 | Log database Web UI | demo.pigsty.io/vlogs/ |
| /vtraces/ | VictoriaTraces | 10428 | Distributed tracing Web UI | demo.pigsty.io/vtraces/ |
| /vmalert/ | VMAlert | 8880 | Alert rule management | demo.pigsty.io/vmalert/ |
| /alertmgr/ | AlertManager | 9059 | Alert management Web UI | demo.pigsty.io/alertmgr/ |
| /blackbox/ | Blackbox | 9115 | Blackbox exporter | - |
| /haproxy/* | HAProxy | 9101 | Load balancer admin Web UI | - |
| /pev | PEV2 | 80 | PostgreSQL execution plan visualizer | demo.pigsty.io/pev |
| /nginx | Nginx | 80 | Nginx status page (for metrics) | - |

Domain Access

If you have your own domain name, you can point it to Pigsty server’s IP address to access various services via domain.

If you want to enable HTTPS, you should modify the home server configuration in the infra_portal parameter:

# HTTP with your own domain
all:
  vars:
    infra_portal:
      home : { domain: i.pigsty } # Replace i.pigsty with your domain

# HTTPS with a certificate: certbot specifies the certificate name
all:
  vars:
    infra_portal:
      home : { domain: demo.pigsty.io ,certbot: mycert }

You can run the make cert command after deployment to request a free Let’s Encrypt certificate for the domain. If the certbot field is not defined, Pigsty uses its local CA to issue a self-signed HTTPS certificate by default; in that case, you must first trust Pigsty’s self-signed CA for your browser to access the site without warnings.

You can also mount local directories and other upstream services to Nginx. For more management details, refer to INFRA Management - Nginx.

4.4 - Getting Started with PostgreSQL

Get started with PostgreSQL—connect using CLI and graphical clients

PostgreSQL (abbreviated as PG) is the world’s most advanced and popular open-source relational database. Use it to store and retrieve multi-modal data.

This guide is for developers with basic Linux CLI experience but not very familiar with PostgreSQL, helping you quickly get started with PG in Pigsty.

We assume you’re a personal user deploying in the default single-node mode. For prod multi-node HA cluster access, refer to Prod Service Access.


Basics

In the default single-node installation template, you’ll create a PostgreSQL database cluster named pg-meta on the current node, with only one primary instance.

PostgreSQL listens on port 5432, and the cluster has a preset database meta available for use.

After installation, exit the current admin user ssh session and re-login to refresh environment variables. Then simply type p and press Enter to access the database cluster via the psql CLI tool:

vagrant@pg-meta-1:~$ p
psql (18.1 (Ubuntu 18.1-1.pgdg24.04+2))
Type "help" for help.

postgres=#

You can also switch to the postgres OS user and execute psql directly to connect to the default postgres admin database.


Connecting to Database

To access a PostgreSQL database, use a CLI tool or graphical client and fill in the PostgreSQL connection string:

postgres://username:password@host:port/dbname

Some drivers and tools may require you to fill in these parameters separately. The following five are typically required:

| Parameter | Description | Example Value | Notes |
|---|---|---|---|
| host | Database server address | 10.10.10.10 | Replace with your node IP or domain; can omit for localhost |
| port | Port number | 5432 | PG default port, can be omitted |
| username | Username | dbuser_dba | Pigsty default database admin |
| password | Password | DBUser.DBA | Pigsty default admin password (change this!) |
| dbname | Database name | meta | Default template database name |
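The same five parameters can be pulled out of a connection string programmatically. A small sketch using Python's standard library and the illustrative defaults above (not a live server):

```python
from urllib.parse import urlparse

# Decompose a PostgreSQL URL into the five parameters listed above.
url = urlparse("postgres://dbuser_dba:DBUser.DBA@10.10.10.10:5432/meta")

host, port = url.hostname, url.port              # 10.10.10.10, 5432
username, password = url.username, url.password  # dbuser_dba, DBUser.DBA
dbname = url.path.lstrip("/")                    # meta

print(host, port, username, dbname)  # 10.10.10.10 5432 dbuser_dba meta
```

This is handy when a driver or tool asks for the parameters separately rather than accepting a full URL.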

For personal use, you can connect and manage directly as the Pigsty default database superuser dbuser_dba, which has full database privileges. If you passed the -g flag to configure, the password is randomly generated and saved in ~/pigsty/pigsty.yml:

cat ~/pigsty/pigsty.yml | grep pg_admin_password

Default Accounts

Pigsty’s default single-node template presets the following database users, ready to use out of the box:

| Username | Password | Role | Purpose |
|---|---|---|---|
| dbuser_dba | DBUser.DBA | Superuser | Database admin (change this!) |
| dbuser_meta | DBUser.Meta | Business admin | App R/W (change this!) |
| dbuser_view | DBUser.Viewer | Read-only user | Data viewing (change this!) |

For example, you can connect to the meta database in the pg-meta cluster using three different connection strings with three different users:

postgres://dbuser_dba:DBUser.DBA@10.10.10.10:5432/meta
postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
postgres://dbuser_view:DBUser.Viewer@10.10.10.10:5432/meta

Note: These default passwords are automatically replaced with random strong passwords when using configure -g. Remember to replace the IP address and password with actual values.


Using CLI Tools

psql is the official PostgreSQL CLI client tool, powerful and the first choice for DBAs and developers.

On a server with Pigsty deployed, you can directly use psql to connect to the local database:

# Simplest way: use postgres system user for local connection (no password needed)
sudo -u postgres psql

# Use connection string (recommended, most universal)
psql 'postgres://dbuser_dba:DBUser.DBA@10.10.10.10:5432/meta'

# Use parameter form
psql -h 10.10.10.10 -p 5432 -U dbuser_dba -d meta

# Use env vars to avoid password appearing in command line
export PGPASSWORD='DBUser.DBA'
psql -h 10.10.10.10 -p 5432 -U dbuser_dba -d meta

After successful connection, you’ll see a prompt like this:

psql (18.1)
Type "help" for help.

meta=#

Common psql Commands

After entering psql, you can execute SQL statements or use meta-commands starting with \:

| Command | Description | Command | Description |
|---|---|---|---|
| Ctrl+C | Interrupt query | Ctrl+D | Exit psql |
| \? | Show all meta commands | \h | Show SQL command help |
| \l | List all databases | \c dbname | Switch to database |
| \d table | View table structure | \d+ table | View table details |
| \du | List all users/roles | \dx | List installed extensions |
| \dn | List all schemas | \dt | List all tables |

Executing SQL

In psql, directly enter SQL statements ending with semicolon ;:

-- Check PostgreSQL version
SELECT version();

-- Check current time
SELECT now();

-- Create a test table
CREATE TABLE test (id SERIAL PRIMARY KEY, name TEXT, created_at TIMESTAMPTZ DEFAULT now());

-- Insert data
INSERT INTO test (name) VALUES ('hello'), ('world');

-- Query data
SELECT * FROM test;

-- Drop test table
DROP TABLE test;

Using Graphical Clients

If you prefer graphical interfaces, here are some popular PostgreSQL clients:

Grafana

Pigsty’s INFRA module includes Grafana with a pre-configured PostgreSQL data source (Meta). You can directly query the database using SQL from the Grafana Explore panel through the browser graphical interface, no additional client tools needed.

Grafana’s default username is admin, and the password can be found in the grafana_admin_password field in the inventory (default pigsty).

DataGrip

DataGrip is a professional database IDE from JetBrains, with powerful features. IntelliJ IDEA’s built-in Database Console can also connect to PostgreSQL in a similar way.

DBeaver

DBeaver is a free open-source universal database tool supporting almost all major databases. It’s a cross-platform desktop client.

pgAdmin

pgAdmin is the official PostgreSQL-specific GUI tool from PGDG, available through browser or as a desktop client.

Pigsty provides a configuration template for one-click pgAdmin service deployment using Docker in Software Template: pgAdmin.


Viewing Monitoring Dashboards

Pigsty provides many PostgreSQL monitoring dashboards, covering everything from cluster overview to single-table analysis.

We recommend starting with PGSQL Overview. Many elements in the dashboards are clickable, allowing you to drill down layer by layer to view details of each cluster, instance, database, and even internal database objects like tables, indexes, and functions.


Trying Extensions

One of PostgreSQL’s most powerful features is its extension ecosystem. Extensions can add new data types, functions, index methods, and more to the database.

Pigsty provides an unparalleled 440+ extensions in the PG ecosystem, covering 16 major categories including time-series, geographic, vector, and full-text search—install with one click. Start with three powerful and commonly used extensions that are automatically installed in Pigsty’s default template. You can also install more extensions as needed.

  • postgis: Geographic information system for processing maps and location data
  • pgvector: Vector database supporting AI embedding vector similarity search
  • timescaledb: Time-series database for efficient storage and querying of time-series data
\dx                            -- psql meta command, list installed extensions
TABLE pg_available_extensions; -- List available extensions (and whether installed)
CREATE EXTENSION postgis;      -- Enable postgis extension

Next Steps

Congratulations on completing the PostgreSQL basics! Next, you can start configuring and customizing your database.

4.5 - Customize Pigsty with Configuration

Express your infra and clusters with declarative config files

Besides using the configuration wizard to auto-generate configs, you can write Pigsty config files from scratch. This tutorial guides you through building a complex inventory step by step.

If you define everything in the inventory upfront, a single deploy.yml playbook run completes all deployment—but it hides the details.

This doc breaks down all modules and playbooks, showing how to incrementally build from a simple config to a complete deployment.


Minimal Configuration

The simplest valid config only defines the admin_ip variable—the IP address of the node where Pigsty is installed (admin node):

all: { vars: { admin_ip: 10.10.10.10 } }
# Set region: china to use mirrors
all: { vars: { admin_ip: 10.10.10.10, region: china } }

This config deploys nothing, but running ./deploy.yml generates a self-signed CA in files/pki/ca for issuing certificates.

For convenience, you can also set region to specify which region’s software mirrors to use (default, china, europe).


Add Nodes

Pigsty’s NODE module manages cluster nodes. Any IP address in the inventory will be managed by Pigsty with the NODE module installed.

all:  # Remember to replace 10.10.10.10 with your actual IP
  children: { nodes: { hosts: { 10.10.10.10: {} } } }
  vars:
    admin_ip: 10.10.10.10                   # Current node IP
    region: default                         # Default repos
    node_repo_modules: node,pgsql,infra     # Add node, pgsql, infra repos

# Or, with China mirrors:
all:
  children: { nodes: { hosts: { 10.10.10.10: {} } } }
  vars:
    admin_ip: 10.10.10.10                 # Current node IP
    region: china                         # Use mirrors
    node_repo_modules: node,pgsql,infra   # Add node, pgsql, infra repos

We added two global parameters: node_repo_modules specifies repos to add; region specifies which region’s mirrors to use.

These parameters enable the node to use correct repositories and install required packages. The NODE module offers many customization options: node names, DNS, repos, packages, NTP, kernel params, tuning templates, monitoring, log collection, etc. Even without changes, the defaults are sufficient.

Run deploy.yml or more precisely node.yml to bring the defined node under Pigsty management.

| ID | NODE | INFRA | ETCD | PGSQL | Description |
|---|---|---|---|---|---|
| 1 | 10.10.10.10 | - | - | - | Add node |

Add Infrastructure

A full-featured RDS cloud database service needs infrastructure support: monitoring (metrics/log collection, alerting, visualization), NTP, DNS, and other foundational services.

Define a special group infra to deploy the INFRA module:

all:  # Simply changed group name from nodes -> infra and added infra_seq
  children: { infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } } }
  vars:
    admin_ip: 10.10.10.10
    region: default
    node_repo_modules: node,pgsql,infra

# Or, with China mirrors:
all:
  children: { infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } } }
  vars:
    admin_ip: 10.10.10.10
    region: china
    node_repo_modules: node,pgsql,infra

We also assigned an identity parameter: infra_seq to distinguish nodes in multi-node HA INFRA deployments.

Run infra.yml to install the INFRA and NODE modules on 10.10.10.10:

./infra.yml   # Install INFRA module on infra group (includes NODE module)

The NODE module is implicitly defined for any host that appears in the inventory. NODE is idempotent; re-running has no side effects.

After completion, you’ll have complete observability infrastructure and node monitoring, but PostgreSQL database service is not yet deployed.

If your goal is just to set up this monitoring system (Grafana + Victoria), you’re done! The infra template is designed for this. Everything in Pigsty is modular: you can deploy only monitoring infra without databases; or vice versa—run HA PostgreSQL clusters without infra—Slim Install.

| ID | NODE | INFRA | ETCD | PGSQL | Description |
|----|------|-------|------|-------|-------------|
| 1 | 10.10.10.10 | infra-1 | - | - | Add infrastructure |

Deploy Database Cluster

To provide PostgreSQL service, install the PGSQL module and its dependency ETCD—just two lines of config:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } } # Add pg cluster
  vars: { admin_ip: 10.10.10.10, region: default, node_repo_modules: node,pgsql,infra }
all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } } # Add pg cluster
  vars: { admin_ip: 10.10.10.10, region: china, node_repo_modules: node,pgsql,infra }

We added two new groups: etcd and pg-meta, defining a single-node etcd cluster and a single-node PostgreSQL cluster.

Use ./deploy.yml to redeploy everything, or incrementally deploy:

./etcd.yml  -l etcd      # Install ETCD module on etcd group
./pgsql.yml -l pg-meta   # Install PGSQL module on pg-meta group

PGSQL depends on ETCD for HA consensus, so install ETCD first. After completion, you have a working PostgreSQL service!

| ID | NODE | INFRA | ETCD | PGSQL | Description |
|----|------|-------|------|-------|-------------|
| 1 | 10.10.10.10 | infra-1 | etcd-1 | pg-meta-1 | Add etcd and PostgreSQL cluster |

We used node.yml, infra.yml, etcd.yml, and pgsql.yml to deploy all four core modules on a single machine.


Define Databases and Users

In Pigsty, you can customize PostgreSQL cluster internals like databases and users through the inventory:

all:
  children:
    # Other groups and variables hidden for brevity
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:       # Define database users
          - { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user  }
        pg_databases:   # Define business databases
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [vector] }
  • pg_users: Defines a new user dbuser_meta with password DBUser.Meta
  • pg_databases: Defines a new database meta with Pigsty CMDB schema (optional) and vector extension

Pigsty offers rich customization parameters covering all aspects of databases and users. If you define these parameters upfront, they’re automatically created during ./pgsql.yml execution. For existing clusters, you can incrementally create or modify users and databases:

bin/pgsql-user pg-meta dbuser_meta      # Ensure user dbuser_meta exists in pg-meta
bin/pgsql-db   pg-meta meta             # Ensure database meta exists in pg-meta

Configure PG Version and Extensions

You can install different major versions of PostgreSQL, and up to 440 extensions. Let’s remove the current default PG 18 and install PG 17:

./pgsql-rm.yml -l pg-meta   # Remove old pg-meta cluster (it's PG 18)

We can customize parameters to install and enable common extensions by default: timescaledb, postgis, and pgvector:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq:  1 } } } # Add etcd cluster
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_version: 17   # Specify PG version as 17
        pg_extensions: [ timescaledb, postgis, pgvector ]      # Install these extensions
        pg_libs: 'timescaledb,pg_stat_statements,auto_explain'  # Preload these extension libraries
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector, postgis, timescaledb ] } ]
        pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user } ]

  vars:
    admin_ip: 10.10.10.10
    region: default
    node_repo_modules: node,pgsql,infra
./pgsql.yml -l pg-meta   # Install PG17 and extensions, recreate pg-meta cluster

Add More Nodes

Add more nodes to the deployment, bring them under Pigsty management, deploy monitoring, configure repos, install software…

# Add entire cluster at once, or add nodes individually
bin/node-add pg-test

bin/node-add 10.10.10.11
bin/node-add 10.10.10.12
bin/node-add 10.10.10.13

Deploy HA PostgreSQL Cluster

Now deploy a new database cluster pg-test on the three newly added nodes, using a three-node HA architecture:

all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }, vars: { pg_cluster: pg-meta } }
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica  }
        10.10.10.13: { pg_seq: 3, pg_role: replica  }
      vars: { pg_cluster: pg-test }

Deploy Redis Cluster

Pigsty provides optional Redis support as a caching service in front of PostgreSQL:

bin/redis-add redis-ms
bin/redis-add redis-meta
bin/redis-add redis-test

Redis HA requires cluster mode or sentinel mode. See Redis Configuration.
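The bin/redis-add script expects the target clusters to be defined in the inventory first. A minimal sketch of a master-replica definition; the parameter names follow the REDIS module's conventions, but verify them against the Redis Configuration docs:

```yaml
redis-ms:   # example master-replica redis cluster
  hosts:
    10.10.10.10:
      redis_node: 1                # node sequence within the cluster
      redis_instances:             # instances on this node, keyed by port
        6379: { }                                  # master instance
        6380: { replica_of: '10.10.10.10 6379' }   # replica of 6379
  vars:
    redis_cluster: redis-ms        # cluster name (identity parameter)
    redis_max_memory: 64MB         # per-instance memory limit (example value)
```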


Deploy MinIO Cluster

Pigsty provides optional support for MinIO, an open-source, S3-compatible object storage service, which can serve as a backup repository for PostgreSQL.

./minio.yml -l minio

Serious prod MinIO deployments typically require at least 4 nodes with 4 disks each (4N/16D).
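The minio group referenced above must exist in the inventory. A single-node sketch, using the same identity-parameter pattern as the other modules (minio_seq, minio_cluster):

```yaml
minio:
  hosts: { 10.10.10.10: { minio_seq: 1 } }   # identity parameter, like etcd_seq/pg_seq
  vars:  { minio_cluster: minio }            # cluster name
```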


Deploy Docker Module

If you want to use containers to run tools for managing PG or software using PostgreSQL, install the DOCKER module:

./docker.yml -l infra

Use pre-made application templates to launch common software tools with one click, such as pgAdmin, a GUI tool for PG management:

./app.yml    -l infra -e app=pgadmin

You can even self-host enterprise-grade Supabase with Pigsty, using external HA PostgreSQL clusters as the foundation and running stateless components in containers.

4.6 - Run Playbooks with Ansible

Use Ansible playbooks to deploy and manage Pigsty clusters

Pigsty uses Ansible to manage clusters, a very popular large-scale/batch/automation ops tool in the SRE community.

Ansible uses a declarative approach to server configuration management. All module deployments are implemented through a series of idempotent Ansible playbooks.

For example, a single-node deployment uses the deploy.yml playbook. Pigsty ships with more built-in playbooks that you can use as needed.

Understanding Ansible basics helps with better use of Pigsty, but this is not required, especially for single-node deployment.


Deploy Playbook

Pigsty provides a “one-stop” deploy playbook deploy.yml, installing all modules on the current env in one go (if defined in config):

| Playbook | Command | Group |
|----------|---------|-------|
| infra.yml | ./infra.yml -l infra | infra |
| node.yml | ./node.yml | [nodes] |
| etcd.yml | ./etcd.yml -l etcd | etcd |
| minio.yml | ./minio.yml -l minio | minio |
| pgsql.yml | ./pgsql.yml | [pgsql] |

This is the simplest deployment method. You can also follow instructions in Customization Guide to incrementally complete deployment of all modules and nodes step by step.


Install Ansible

When using the Pigsty installation script, or the bootstrap phase of offline installation, Pigsty will automatically install ansible and its dependencies for you.

If you want to manually install Ansible, refer to the following instructions. The minimum supported Ansible version is 2.9.

sudo apt install -y ansible python3-jmespath        # Debian / Ubuntu
sudo dnf install -y ansible python-jmespath         # EL 10
sudo dnf install -y ansible python3.12-jmespath     # EL 9/8
brew install ansible                                # macOS
pip3 install jmespath                               # macOS: jmespath dependency

Ansible is also available on macOS. You can use Homebrew to install Ansible on Mac, and use it as an admin node to manage remote cloud servers. This is convenient for single-node Pigsty deployment on cloud VPS, but not recommended in prod envs.


Execute Playbook

Ansible playbooks are executable YAML files containing a series of task definitions to execute. Running playbooks requires the ansible-playbook executable in your environment variable PATH. Running ./node.yml playbook is essentially executing the ansible-playbook node.yml command.

You can use some parameters to fine-tune playbook execution. The following 4 parameters are essential for effective Ansible use:

| Purpose | Parameter | Description |
|---------|-----------|-------------|
| Target | -l / --limit <pattern> | Limit execution to specific groups/hosts/patterns |
| Tasks | -t / --tags <tags> | Only run tasks with specific tags |
| Params | -e / --extra-vars <vars> | Extra command-line parameters |
| Config | -i / --inventory <path> | Use a specific inventory file |
./node.yml                         # Run node playbook on all hosts
./pgsql.yml -l pg-test             # Run pgsql playbook on pg-test cluster
./infra.yml -t repo_build          # Run infra.yml subtask repo_build
./pgsql-rm.yml -e pg_rm_pkg=false  # Remove pgsql, but keep packages (don't uninstall software)
./infra.yml -i conf/mynginx.yml    # Use another location's config file

Limit Hosts

Playbook execution targets can be limited with -l|--limit <selector>. This is convenient when running playbooks on specific hosts/nodes or groups/clusters. Here are some host limit examples:

./pgsql.yml                              # Run on all hosts (dangerous!)
./pgsql.yml -l pg-test                   # Run on pg-test cluster
./pgsql.yml -l 10.10.10.10               # Run on single host 10.10.10.10
./pgsql.yml -l pg-*                      # Run on hosts/groups matching glob `pg-*`
./pgsql.yml -l '10.10.10.11,&pg-test'    # Run on 10.10.10.11 in pg-test group
./pgsql-rm.yml -l 'pg-test,!10.10.10.11' # Run on pg-test, except 10.10.10.11

See all details in Ansible documentation: Patterns: targeting hosts and groups


Limit Tasks

Execution tasks can be controlled with -t|--tags <tags>. If specified, only tasks with the given tags will execute instead of the entire playbook.

./infra.yml -t repo          # Create repo
./node.yml  -t node_pkg      # Install node packages
./pgsql.yml -t pg_install    # Install PG packages and extensions
./etcd.yml  -t etcd_purge    # Destroy ETCD cluster
./minio.yml -t minio_alias   # Write MinIO CLI config

To run multiple tasks, specify multiple tags separated by commas -t tag1,tag2:

./node.yml  -t node_repo,node_pkg   # Add repos, then install packages
./pgsql.yml -t pg_hba,pg_reload     # Configure, then reload pg hba rules

Extra Vars

You can override config parameters at runtime using CLI arguments, which have highest priority.

Extra command-line parameters are passed via -e|--extra-vars KEY=VALUE, usable multiple times:

# Create admin using another admin user
./node.yml -e ansible_user=admin -k -K -t node_admin

# Initialize a specific Redis instance: 10.10.10.11:6379
./redis.yml -l 10.10.10.10 -e redis_port=6379 -t redis

# Remove PostgreSQL but keep packages and data
./pgsql-rm.yml -e pg_rm_pkg=false -e pg_rm_data=false

For complex parameters, use JSON strings to pass multiple complex parameters at once:

# Add repo and install packages
./node.yml -t node_install -e '{"node_repo_modules":"infra","node_packages":["duckdb"]}'

Specify Inventory

The default config file is pigsty.yml in the Pigsty home directory.

You can use -i <path> to specify a different inventory file path.

./pgsql.yml -i conf/rich.yml            # Initialize single node with all extensions per rich config
./pgsql.yml -i conf/ha/full.yml         # Initialize 4-node cluster per full config
./pgsql.yml -i conf/app/supa.yml        # Initialize 1-node Supabase deployment per supa.yml

Convenience Scripts

Pigsty provides a series of convenience scripts to simplify common operations. These scripts are in the bin/ directory:

bin/node-add   <cls>            # Add nodes to Pigsty management: ./node.yml -l <cls>
bin/node-rm    <cls>            # Remove nodes from Pigsty: ./node-rm.yml -l <cls>
bin/pgsql-add  <cls>            # Initialize PG cluster: ./pgsql.yml -l <cls>
bin/pgsql-rm   <cls>            # Remove PG cluster: ./pgsql-rm.yml -l <cls>
bin/pgsql-user <cls> <username> # Add business user: ./pgsql-user.yml -l <cls> -e username=<user>
bin/pgsql-db   <cls> <dbname>   # Add business database: ./pgsql-db.yml -l <cls> -e dbname=<db>
bin/redis-add  <cls>            # Initialize Redis cluster: ./redis.yml -l <cls>
bin/redis-rm   <cls>            # Remove Redis cluster: ./redis-rm.yml -l <cls>

These scripts are simple wrappers around Ansible playbooks, making common operations more convenient.


Playbook List

Below are the built-in playbooks in Pigsty. You can also easily add your own playbooks, or customize and modify playbook implementation logic as needed.

| Module | Playbook | Function |
|--------|----------|----------|
| INFRA | deploy.yml | One-click deploy Pigsty on current node |
| INFRA | infra.yml | Initialize Pigsty infrastructure on infra nodes |
| INFRA | infra-rm.yml | Remove infrastructure components from infra nodes |
| INFRA | cache.yml | Create offline packages from target node |
| INFRA | cert.yml | Issue certificates using Pigsty self-signed CA |
| NODE | node.yml | Initialize node, adjust to desired state |
| NODE | node-rm.yml | Remove node from Pigsty |
| PGSQL | pgsql.yml | Initialize HA PostgreSQL cluster or add replica |
| PGSQL | pgsql-rm.yml | Remove PostgreSQL cluster or replica |
| PGSQL | pgsql-db.yml | Add new business database to existing cluster |
| PGSQL | pgsql-user.yml | Add new business user to existing cluster |
| PGSQL | pgsql-pitr.yml | Perform point-in-time recovery on cluster |
| PGSQL | pgsql-monitor.yml | Monitor remote PostgreSQL with local exporter |
| PGSQL | pgsql-migration.yml | Generate migration manual and scripts |
| PGSQL | slim.yml | Install Pigsty with minimal components |
| REDIS | redis.yml | Initialize Redis cluster/node/instance |
| REDIS | redis-rm.yml | Remove Redis cluster/node/instance |
| ETCD | etcd.yml | Initialize ETCD cluster or add new member |
| ETCD | etcd-rm.yml | Remove ETCD cluster/data or shrink member |
| MINIO | minio.yml | Initialize MinIO cluster (optional pgBackRest repo) |
| MINIO | minio-rm.yml | Remove MinIO cluster and data |
| DOCKER | docker.yml | Install Docker on nodes |
| DOCKER | app.yml | Install applications using Docker Compose |
| FERRET | mongo.yml | Install Mongo/FerretDB on nodes |

4.7 - Offline Installation

Install Pigsty in air-gapped env using offline packages

Pigsty installs from Internet upstream by default, but some envs are isolated from the Internet. To address this, Pigsty supports offline installation using offline packages. Think of them as Linux-native Docker images.


Overview

Offline packages bundle all required RPM/DEB packages and dependencies; they are snapshots of the local APT/YUM repo after a normal installation.

In serious prod deployments, we strongly recommend using offline packages. They ensure all future nodes have software versions consistent with the existing env, and avoid online installation failures caused by upstream changes (quite common!), so you can keep running the environment independently.


Offline Packages

We typically release offline packages for the following Linux distros, using the latest OS minor version.

| Linux Distribution | System Code | Minor Version | Package |
|--------------------|-------------|---------------|---------|
| RockyLinux 8 x86_64 | el8.x86_64 | 8.10 | pigsty-pkg-v4.0.0.el8.x86_64.tgz |
| RockyLinux 8 aarch64 | el8.aarch64 | 8.10 | pigsty-pkg-v4.0.0.el8.aarch64.tgz |
| RockyLinux 9 x86_64 | el9.x86_64 | 9.6 | pigsty-pkg-v4.0.0.el9.x86_64.tgz |
| RockyLinux 9 aarch64 | el9.aarch64 | 9.6 | pigsty-pkg-v4.0.0.el9.aarch64.tgz |
| RockyLinux 10 x86_64 | el10.x86_64 | 10.0 | pigsty-pkg-v4.0.0.el10.x86_64.tgz |
| RockyLinux 10 aarch64 | el10.aarch64 | 10.0 | pigsty-pkg-v4.0.0.el10.aarch64.tgz |
| Debian 12 x86_64 | d12.x86_64 | 12.11 | pigsty-pkg-v4.0.0.d12.x86_64.tgz |
| Debian 12 aarch64 | d12.aarch64 | 12.11 | pigsty-pkg-v4.0.0.d12.aarch64.tgz |
| Debian 13 x86_64 | d13.x86_64 | 13.2 | pigsty-pkg-v4.0.0.d13.x86_64.tgz |
| Debian 13 aarch64 | d13.aarch64 | 13.2 | pigsty-pkg-v4.0.0.d13.aarch64.tgz |
| Ubuntu 24.04 x86_64 | u24.x86_64 | 24.04.2 | pigsty-pkg-v4.0.0.u24.x86_64.tgz |
| Ubuntu 24.04 aarch64 | u24.aarch64 | 24.04.2 | pigsty-pkg-v4.0.0.u24.aarch64.tgz |
| Ubuntu 22.04 x86_64 | u22.x86_64 | 22.04.5 | pigsty-pkg-v4.0.0.u22.x86_64.tgz |
| Ubuntu 22.04 aarch64 | u22.aarch64 | 22.04.5 | pigsty-pkg-v4.0.0.u22.aarch64.tgz |

If you use an OS from the list above (exact minor version match), we recommend using offline packages. Pigsty provides ready-to-use pre-made offline packages for these systems, freely downloadable from GitHub.

You can find these packages on the GitHub release page:

6a26fa44f90a16c7571d2aaf0e997d07  pigsty-v4.0.0.tgz
537839201c536a1211f0b794482d733b  pigsty-pkg-v4.0.0.el9.x86_64.tgz
85687cb56517acc2dce14245452fdc05  pigsty-pkg-v4.0.0.el9.aarch64.tgz
a333e8eb34bf93f475c85a9652605139  pigsty-pkg-v4.0.0.el10.x86_64.tgz
4b98b463e2ebc104c35ddc94097e5265  pigsty-pkg-v4.0.0.el10.aarch64.tgz
4f62851c9d79a490d403f59deb4823f4  pigsty-pkg-v4.0.0.el8.x86_64.tgz
66e283c9f6bfa80654f7ed3ffb9b53e5  pigsty-pkg-v4.0.0.el8.aarch64.tgz
f7971d9d6aab1f8f307556c2f64b701c  pigsty-pkg-v4.0.0.d12.x86_64.tgz
c4d870e5ef61ed05724c15fbccd1220b  pigsty-pkg-v4.0.0.d12.aarch64.tgz
408991c5ff028b5c0a86fac804d64b93  pigsty-pkg-v4.0.0.d13.x86_64.tgz
8d7c9404b97a11066c00eb7fc1330181  pigsty-pkg-v4.0.0.d13.aarch64.tgz
2a25eff283332d9006854f36af6602b2  pigsty-pkg-v4.0.0.u24.x86_64.tgz
a4fb30148a2d363bbfd3bec0daa14ab6  pigsty-pkg-v4.0.0.u24.aarch64.tgz
87bb91ef703293b6ec5b77ae3bb33d54  pigsty-pkg-v4.0.0.u22.x86_64.tgz
5c81bdaa560dad4751840dec736fe404  pigsty-pkg-v4.0.0.u22.aarch64.tgz
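To confirm a download is intact, compare it against the checksum listed above. A small helper sketch using standard md5sum (the function name is ours, not part of Pigsty):

```shell
# verify_md5 <expected-md5> <file>: succeed only if the file matches
verify_md5() {
  # md5sum -c expects "checksum  filename" lines; --status suppresses output
  echo "$1  $2" | md5sum -c --status -
}

# e.g.: verify_md5 537839201c536a1211f0b794482d733b /tmp/pkg.tgz
```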

Using Offline Packages

Offline installation steps:

  1. Download Pigsty offline package, place it at /tmp/pkg.tgz
  2. Download Pigsty source package, extract and enter directory (assume extracted to home: cd ~/pigsty)
  3. ./bootstrap, it will extract the package and configure using local repo (and install ansible from it offline)
  4. ./configure -g -c rich, you can directly use the rich template configured for offline installation, or configure yourself
  5. Run ./deploy.yml as usual—it will install everything from the local repo

If you want to use an already extracted and configured offline package with your own config, ensure these settings:

  • repo_enabled: Set to true to build the local software repo (explicitly disabled in most templates)
  • node_repo_modules: Set to local so all nodes in the env install from the local software repo
    • In most templates, this is explicitly set to node,infra,pgsql, i.e., install directly from these upstream repos.
    • Setting it to local installs all packages from the local software repo only: fastest, with no interference from other repos.
    • To use both local and upstream repos, add other repo module names too, e.g., local,node,infra,pgsql

In short: repo_enabled controls whether a local software repo is built, and node_repo_modules controls which repos each node installs from. If local is the only entry, the local repo becomes the sole source for all nodes; adding other module names mixes in upstream repos.
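Putting the two settings together, an offline-install inventory typically carries:

```yaml
vars:
  repo_enabled: true          # build and serve the local software repo
  node_repo_modules: local    # all nodes install from the local repo only
```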

Hybrid Installation Mode

If your env has Internet access, there’s a hybrid approach combining advantages of offline and online installation. You can use the offline package as a base, and supplement missing packages online.

For example, if you're running RockyLinux 9.5 but the official offline package targets RockyLinux 9.6, you can still use the el9 offline package, then execute make repo-build before the formal installation to re-download the packages missing for 9.5. Pigsty will download the required increments from upstream repos.


Making Offline Packages

If your OS isn’t in the default list, you can make your own offline package with the built-in cache.yml playbook:

  1. Find a node running the exact same OS version with Internet access
  2. Use rich config template to perform online installation (configure -c rich)
  3. cd ~/pigsty; ./cache.yml: make and fetch the offline package to ~/pigsty/dist/${version}/
  4. Copy the offline package to the env without Internet access (ftp, scp, usb, etc.), extract and use via bootstrap

We offer paid services providing tested, pre-made offline packages for specific Linux major.minor versions (¥200).


Bootstrap

Pigsty relies on ansible to execute playbooks; this script is responsible for ensuring ansible is correctly installed in various ways.

./bootstrap       # Ensure ansible is correctly installed (if offline package exists, use offline installation and extract first)

Usually, you need to run this script in two cases:

  • You didn’t install Pigsty via the installation script, but by downloading or git clone of the source package, so ansible isn’t installed.
  • You’re preparing to install Pigsty via offline packages and need to use this script to install ansible from the offline package.

The bootstrap script will automatically detect if the offline package exists (-p to specify, default is /tmp/pkg.tgz). If it exists, it will extract and use it, then install ansible from it. If the offline package doesn’t exist, it will try to install ansible from the Internet. If that still fails, you’re on your own!

4.8 - Slim Installation

Install only HA PostgreSQL clusters with minimal dependencies

If you only want HA PostgreSQL database cluster itself without monitoring, infra, etc., consider Slim Installation.

Slim installation has no INFRA module, no monitoring, no local repo—just ETCD and PGSQL and partial NODE functionality.


Overview

To use slim installation, you need to:

  1. Use the slim.yml slim install config template (configure -c slim)
  2. Run the slim.yml playbook instead of the default deploy.yml
curl https://repo.pigsty.io/get | bash
./configure -g -c slim
./slim.yml

Description

Slim installation only installs/configures these components:

| Component | Required | Description |
|-----------|----------|-------------|
| patroni | ⚠️ Required | Bootstrap HA PostgreSQL cluster |
| etcd | ⚠️ Required | Meta database dependency (DCS) for Patroni |
| pgbouncer | ✔️ Optional | PostgreSQL connection pooler |
| vip-manager | ✔️ Optional | L2 VIP binding to PostgreSQL cluster primary |
| haproxy | ✔️ Optional | Auto-routing services via Patroni health checks |
| chronyd | ✔️ Optional | Time synchronization with NTP server |
| tuned | ✔️ Optional | Node tuning template and kernel parameter management |

You can disable all optional components via configuration, keeping only the required patroni and etcd.
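As a hypothetical sketch (the parameter names below are assumed and should be checked against the Parameters reference), disabling the optional components might look like:

```yaml
vars:
  pgbouncer_enabled: false   # assumed param: skip the connection pooler
  pg_vip_enabled: false      # assumed param: skip the L2 VIP (vip-manager)
  haproxy_enabled: false     # assumed param: skip haproxy service routing
  node_ntp_enabled: false    # assumed param: keep existing chronyd/NTP config
```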

Because there is no INFRA module (whose Nginx serves the local software repo), offline installation only works in single-node mode.


Configuration

Slim installation config file example: conf/slim.yml:

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | 10.10.10.10 | pg-meta-1 | No INFRA module | etcd-1 |
---
#==============================================================#
# File      :   slim.yml
# Desc      :   Pigsty slim installation config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/slim
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for slim / minimal installation
# No monitoring & infra will be installed, just raw postgresql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c slim
#   ./slim.yml

all:
  children:

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        #10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        #10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        #10.10.10.11: { pg_seq: 2, pg_role: replica } # you can add more!
        #10.10.10.12: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Deployment

Slim installation uses the slim.yml playbook instead of deploy.yml:

./slim.yml

HA Cluster

Slim installation can also deploy HA clusters—just add more nodes to the etcd and pg-meta groups. A three-node deployment example:

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | 10.10.10.10 | pg-meta-1 | No INFRA module | etcd-1 |
| 2 | 10.10.10.11 | pg-meta-2 | No INFRA module | etcd-2 |
| 3 | 10.10.10.12 | pg-meta-3 | No INFRA module | etcd-3 |
all:
  children:
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
        10.10.10.11: { etcd_seq: 2 }  # <-- New
        10.10.10.12: { etcd_seq: 3 }  # <-- New

    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica } # <-- New
        10.10.10.12: { pg_seq: 3, pg_role: replica } # <-- New
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am
  vars:
    # omitted ……

4.9 - Security Tips

Three security hardening tips for single-node quick-start deployment

For Demo/Dev single-node deployments, Pigsty’s default config is secure enough as long as you change default passwords.

If your deployment is exposed to Internet or office network, consider adding firewall rules to restrict port access and source IPs for enhanced security.

Additionally, we recommend protecting Pigsty’s critical files (config files and CA private key) from unauthorized access and backing them up regularly.

For enterprise prod envs with strict security requirements, refer to the Deployment - Security Hardening documentation for advanced configuration.


Passwords

Pigsty is an open-source project with well-known default passwords. If your deployment is exposed to Internet or office network, you must change all default passwords!

| Module | Parameter | Default Value |
|--------|-----------|---------------|
| INFRA | grafana_admin_password | pigsty |
| INFRA | grafana_view_password | DBUser.Viewer |
| PGSQL | pg_admin_password | DBUser.DBA |
| PGSQL | pg_monitor_password | DBUser.Monitor |
| PGSQL | pg_replication_password | DBUser.Replicator |
| PGSQL | patroni_password | Patroni.API |
| NODE | haproxy_admin_password | pigsty |
| MINIO | minio_secret_key | S3User.MinIO |
| ETCD | etcd_root_password | Etcd.Root |

To avoid manually modifying passwords, Pigsty’s configuration wizard provides automatic random strong password generation using the -g argument with configure.

$ ./configure -g
configure pigsty v4.0.0 begin
[ OK ] region = china
[WARN] kernel  = Darwin, can be used as admin node only
[ OK ] machine = arm64
[ OK ] package = brew (macOS)
[WARN] primary_ip = default placeholder 10.10.10.10 (macOS)
[ OK ] mode = meta (unknown distro)
[ OK ] locale  = C.UTF-8
[ OK ] generating random passwords...
    grafana_admin_password   : CdG0bDcfm3HFT9H2cvFuv9w7
    pg_admin_password        : 86WqSGdokjol7WAU9fUxY8IG
    pg_monitor_password      : 0X7PtgMmLxuCd2FveaaqBuX9
    pg_replication_password  : 4iAjjXgEY32hbRGVUMeFH460
    patroni_password         : DsD38QLTSq36xejzEbKwEqBK
    haproxy_admin_password   : uhdWhepXrQBrFeAhK9sCSUDo
    minio_secret_key         : z6zrYUN1SbdApQTmfRZlyWMT
    etcd_root_password       : Bmny8op1li1wKlzcaAmvPiWc
    DBUser.Meta              : U5v3CmeXICcMdhMNzP9JN3KY
    DBUser.Viewer            : 9cGQF1QMNCtV3KlDn44AEzpw
    S3User.Backup            : 2gjgSCFYNmDs5tOAiviCqM2X
    S3User.Meta              : XfqkAKY6lBtuDMJ2GZezA15T
    S3User.Data              : OygorcpCbV7DpDmqKe3G6UOj
[ OK ] random passwords generated, check and save them
[ OK ] ansible = ready
[ OK ] pigsty configured
[WARN] don't forget to check it and change passwords!
proceed with ./deploy.yml
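If you prefer to generate strong passwords yourself, here is a minimal sketch; the exact charset and length used by configure -g may differ:

```shell
# gen_pw: print a 24-character alphanumeric password from /dev/urandom
gen_pw() {
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
  echo
}

gen_pw
```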

Firewall

For deployments exposed to Internet or office networks, we strongly recommend configuring firewall rules to limit access IP ranges and ports.

You can use your cloud provider’s security group features, or Linux distribution firewall services (like firewalld, ufw, iptables, etc.) to implement this.

| Direction | Protocol | Port | Service | Description |
|-----------|----------|------|---------|-------------|
| Inbound | TCP | 22 | SSH | Allow SSH login access |
| Inbound | TCP | 80 | Nginx | Allow Nginx HTTP access |
| Inbound | TCP | 443 | Nginx | Allow Nginx HTTPS access |
| Inbound | TCP | 5432 | PostgreSQL | Remote database access, enable as needed |

Pigsty supports configuring firewall rules to allow 22/80/443/5432 from external networks, but this is not enabled by default.


Files

In Pigsty, you need to protect the following files:

  • pigsty.yml: Pigsty main config file, contains access information and passwords for all nodes
  • files/pki/ca/ca.key: Pigsty self-signed CA private key, used to issue all SSL certificates in the deployment (auto-generated during deployment)

We recommend strictly controlling access permissions for these two files, regularly backing them up, and storing them in a secure location.
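A sketch of tightening permissions and archiving both files, run from the Pigsty home directory; the function name and destination path are examples, not part of Pigsty:

```shell
# backup_secrets <dest-dir>: restrict perms, then archive the critical files
backup_secrets() {
  chmod 600 pigsty.yml files/pki/ca/ca.key
  tar -czf "$1/pigsty-secrets-$(date +%F).tgz" pigsty.yml files/pki/ca/ca.key
}

# e.g.: backup_secrets /secure/backup
```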

5 - Deployment

Multi-node, high-availability Pigsty deployment for serious production environments.

Unlike Getting Started, production Pigsty deployments require more Architecture Planning and Preparation.

This chapter helps you understand the complete deployment process and provides best practices for production environments.


Before deploying to production, we recommend testing in Pigsty’s Sandbox to fully understand the workflow. Use Vagrant to create a local 4-node sandbox, or leverage Terraform to provision larger simulation environments in the cloud.

pigsty-sandbox

For production, you typically need at least three nodes for high availability. You should understand Pigsty’s core Concepts and common administration procedures, including Configuration, Ansible Playbooks, and Security Hardening for enterprise compliance.

5.1 - Install Pigsty for Production

How to install Pigsty on Linux hosts for production?

This is the Pigsty production multi-node deployment guide. For single-node Demo/Dev setups, see Getting Started.


Summary

Prepare nodes with SSH access according to your architecture plan and install a compatible Linux OS. Then, as an admin user with passwordless ssh and sudo, run:

curl -fsSL https://repo.pigsty.io/get | bash;         # International
curl -fsSL https://repo.pigsty.cc/get | bash;         # Backup Mirror

This runs the install script, which downloads and extracts the Pigsty source into your home directory and installs its dependencies. Then complete configuration and deployment to finish the installation.

Before running deploy.yml for deployment, review and edit the configuration inventory: pigsty.yml.

cd ~/pigsty      # Enter Pigsty directory
./configure -g   # Generate config file (optional, skip if you know how to configure)
./deploy.yml     # Execute deployment playbook based on generated config

After installation, access the WebUI via IP/domain + ports 80/443, and PostgreSQL service via port 5432.

Full installation takes 3-10 minutes depending on specs/network. Offline installation significantly speeds this up; slim installation further accelerates when monitoring isn’t needed.

Video Example: 20-node Production Simulation (Ubuntu 24.04 x86_64)


Prepare

Production Pigsty deployment involves preparation work. Here’s the complete checklist:

| Item | Requirement | Item | Requirement |
|------|-------------|------|-------------|
| Node | At least 1C2G, no upper limit | Plan | Multiple homogeneous nodes: 2/3/4 or more |
| Disk | /data as default mount point | FS | xfs recommended; ext4/zfs as needed |
| VIP | L2 VIP, optional (unavailable in cloud) | Network | Static IPv4, single-node can use 127.0.0.1 |
| CA | Self-signed CA or specify existing certs | Domain | Local/public domain, optional, default h.pigsty |
| Kernel | Linux x86_64 / aarch64 | Linux | el8, el9, el10, d12, d13, u22, u24 |
| Locale | C.UTF-8 or C | Firewall | Ports: 80/443/22/5432 (optional) |
| User | Avoid root and postgres | Sudo | sudo privilege, preferably with nopass |
| SSH | Passwordless SSH via public key | Accessible | `ssh <ip\|alias> sudo ls` with no error |

Install

Use the following to automatically install the Pigsty source package to ~/pigsty (recommended). Deployment dependencies (Ansible) are auto-installed.

curl -fsSL https://repo.pigsty.io/get | bash            # Install latest stable version
curl -fsSL https://repo.pigsty.cc/get | bash            # Backup mirror
curl -fsSL https://repo.pigsty.io/get | bash -s v4.0.0  # Install specific version

If you prefer not to run remote scripts, manually download or clone the source. When using git, always checkout a specific version before use:

git clone https://github.com/pgsty/pigsty; cd pigsty;
git checkout v4.0.0;  # Always checkout a specific version when using git

For manual download/clone, additionally run bootstrap to manually install Ansible and other dependencies, or install them yourself:

./bootstrap           # Install ansible for subsequent deployment

Configure

In Pigsty, deployment details are defined by the configuration inventory—the pigsty.yml config file. Customize through declarative configuration.

Pigsty provides configure as an optional configuration wizard, generating a configuration inventory with good defaults based on your environment:

./configure -g                # Use wizard to generate config with random passwords

The generated config defaults to ~/pigsty/pigsty.yml. Review and customize before installation.

Many configuration templates are available for reference. You can skip the wizard and directly edit pigsty.yml:

./configure -c ha/full -g       # Use 4-node sandbox template
./configure -c ha/trio -g       # Use 3-node minimal HA template
./configure -c ha/dual -g -v 17 # Use 2-node semi-HA template with PG 17
./configure -c ha/simu -s       # Use 20-node production simulation, skip IP check, no random passwords
Example configure output
vagrant@meta:~/pigsty$ ./configure
configure pigsty v4.0.0 begin
[ OK ] region = china
[ OK ] kernel  = Linux
[ OK ] machine = x86_64
[ OK ] package = deb,apt
[ OK ] vendor  = ubuntu (Ubuntu)
[ OK ] version = 22 (22.04)
[ OK ] sudo = vagrant ok
[ OK ] ssh = [email protected] ok
[WARN] Multiple IP address candidates found:
    (1) 192.168.121.38	    inet 192.168.121.38/24 metric 100 brd 192.168.121.255 scope global dynamic eth0
    (2) 10.10.10.10	    inet 10.10.10.10/24 brd 10.10.10.255 scope global eth1
[ OK ] primary_ip = 10.10.10.10 (from demo)
[ OK ] admin = [email protected] ok
[ OK ] mode = meta (ubuntu22.04)
[ OK ] locale  = C.UTF-8
[ OK ] ansible = ready
[ OK ] pigsty configured
[WARN] don't forget to check it and change passwords!
proceed with ./deploy.yml

The wizard only replaces the current node’s IP (use -s to skip replacement). For multi-node deployments, replace other node IPs manually. Also customize the config as needed—modify default passwords, add nodes, etc.

Common configure parameters:

| Parameter | Description |
|-----------|-------------|
| `-c\|--conf` | Specify config template relative to conf/, without .yml suffix |
| `-v\|--version` | PostgreSQL major version: 13, 14, 15, 16, 17, 18 |
| `-r\|--region` | Upstream repo region for faster downloads: default / china / europe |
| `-n\|--non-interactive` | Use CLI params for primary IP, skip interactive wizard |
| `-x\|--proxy` | Configure proxy_env from current environment variables |

If your machine has multiple IPs, explicitly specify one with -i|--ip <ipaddr> or provide it interactively. The script replaces IP placeholder 10.10.10.10 with the current node’s primary IPv4. Use a static IP; never use public IPs.

Generated config is at ~/pigsty/pigsty.yml. Review and modify before installation.
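For multi-node deployments, the part you typically edit by hand is the host lists. A minimal sketch of the relevant sections of pigsty.yml (the IPs are placeholders for your own static addresses):

```yaml
all:
  children:
    infra:   { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:    { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:  { pg_cluster: pg-meta }
  vars:
    admin_ip: 10.10.10.10   # replace with your admin node's primary IP
```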


Deploy

Pigsty’s deploy.yml playbook applies the configuration blueprint to all target nodes.

./deploy.yml     # Deploy everything on all nodes at once
Example deployment output
......

TASK [pgsql : pgsql init done] *************************************************
ok: [10.10.10.11] => {
    "msg": "postgres://10.10.10.11/postgres | meta  | dbuser_meta dbuser_view "
}
......

TASK [pg_monitor : load grafana datasource meta] *******************************
changed: [10.10.10.11]

PLAY RECAP *********************************************************************
10.10.10.11                : ok=302  changed=232  unreachable=0    failed=0    skipped=65   rescued=0    ignored=1
localhost                  : ok=6    changed=3    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

When output ends with pgsql init done, PLAY RECAP, etc., installation is complete!



Interface

Assuming the 4-node deployment template, your Pigsty environment should have a structure like:

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | 10.10.10.10 | pg-meta-1 | infra-1 | etcd-1 |
| 2 | 10.10.10.11 | pg-test-1 | - | - |
| 3 | 10.10.10.12 | pg-test-2 | - | - |
| 4 | 10.10.10.13 | pg-test-3 | - | - |

The INFRA module provides a graphical management interface via browser, accessible through Nginx’s 80/443 ports.

The PGSQL module provides a PostgreSQL database server on port 5432, also accessible via Pgbouncer/HAProxy proxies.

For production multi-node HA PostgreSQL clusters, use service access for automatic traffic routing.


More

After installation, explore the WebUI and access PostgreSQL service via port 5432.

Deploy and monitor more clusters—add definitions to the configuration inventory and run:

bin/node-add   pg-test      # Add pg-test cluster's 3 nodes to Pigsty management
bin/pgsql-add  pg-test      # Initialize a 3-node pg-test HA PG cluster
bin/redis-add  redis-ms     # Initialize Redis cluster: redis-ms
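The cluster definitions behind these commands live in the configuration inventory. For example, the 3-node pg-test cluster could be declared as (IPs are placeholders):

```yaml
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars: { pg_cluster: pg-test }
```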

Most modules require the NODE module first. See available modules:

PGSQL, INFRA, NODE, ETCD, MINIO, REDIS, FERRET, DOCKER

5.2 - Prepare Resources for Serious Deployment

Production deployment preparation including hardware, nodes, disks, network, VIP, domain, software, and filesystem requirements.

Pigsty runs on nodes (physical machines or VMs). This document covers the planning and preparation required for deployment.


Node

Pigsty currently runs on Linux kernel with x86_64 / aarch64 architecture. A “node” refers to an SSH-accessible resource providing a bare Linux OS environment: a physical machine, a virtual machine, or a container equipped with systemd, sudo, and sshd.

Deploying Pigsty requires at least 1 node. You can prepare more and deploy everything in one pass via playbooks, or add nodes later. The minimum spec requirement is 1C1G, but at least 1C2G is recommended. Higher is better—no upper limit. Parameters are auto-tuned based on available resources.

The number of nodes you need depends on your requirements. See Architecture Planning for details. Although a single-node deployment with external backup provides reasonable recovery guarantees, we recommend multiple nodes for production. A functioning HA setup requires at least 3 nodes; 2 nodes provide Semi-HA.


Disk

Pigsty uses /data as the default data directory. If you have a dedicated data disk, mount it there. Use /data1, /data2, /dataN for additional disk drives.

To use a different data directory, configure these parameters:

| Name | Description | Default |
|------|-------------|---------|
| node_data | Node main data directory | /data |
| pg_fs_main | PG main data directory | /data/postgres |
| pg_fs_backup | PG backup directory | /data/backups |
| etcd_data | ETCD data directory | /data/etcd |
| infra_data | Infra data directory | /data/infra |
| nginx_data | Nginx data directory | /data/nginx |
| minio_data | MinIO data directory | /data/minio |
| redis_fs_main | Redis data directory | /data/redis |
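For example, to place everything on a disk mounted at /data1 instead, override the relevant parameters in the inventory (a sketch; set only the parameters you actually need):

```yaml
all:
  vars:
    node_data:    /data1            # node main data directory
    pg_fs_main:   /data1/postgres   # PG main data directory
    pg_fs_backup: /data1/backups    # PG backup directory
```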

Filesystem

You can use any supported Linux filesystem for data disks. For production, we recommend xfs.

xfs is a Linux standard with excellent performance and CoW capabilities for instant large database cluster cloning. MinIO requires xfs. ext4 is another viable option with a richer data recovery tool ecosystem, but lacks CoW. zfs provides RAID and snapshot features but with significant performance overhead and requires separate installation.

Choose among these three based on your needs. Avoid NFS for database services.

Pigsty assumes /data is owned by root:root with 755 permissions. Admins can assign ownership for first-level directories; each application runs with a dedicated user in its subdirectory. See FHS for the directory structure reference.


Network

Pigsty defaults to online installation mode, requiring outbound Internet access. Offline installation eliminates the Internet requirement.

Internally, Pigsty requires a static network. Assign a fixed IPv4 address to each node.

The IP address serves as the node’s unique identifier—the primary IP bound to the main network interface for internal communications.

For single-node deployment without a fixed IP, use the loopback address 127.0.0.1 as a workaround.


VIP

Pigsty supports optional L2 VIP for NODE clusters (keepalived) and PGSQL clusters (vip-manager).

To use L2 VIP, you must explicitly assign an L2 VIP address for each node/database cluster. This is straightforward on your own hardware but may be challenging in public cloud environments.


CA

Pigsty generates a self-signed CA infrastructure for each deployment, issuing all encryption certificates.

If you have an existing enterprise CA or self-signed CA, you can use it to issue the certificates Pigsty requires.


Domain

Pigsty uses a local static domain i.pigsty by default for WebUI access. This is optional—IP addresses work too.

For production, domain names are recommended to enable HTTPS and encrypted data transmission. Domains also allow multiple services on the same port, differentiated by domain name.

For Internet-facing deployments, use public DNS providers (Cloudflare, AWS Route53, etc.) to manage resolution. Point your domain to the Pigsty node’s public IP address. For LAN/office network deployments, use internal DNS servers with the node’s internal IP address.

For local-only access, add the following to /etc/hosts on machines accessing the Pigsty WebUI:

10.10.10.10 i.pigsty    # Replace with your domain and Pigsty node IP

Linux

Pigsty runs on Linux. It supports 14 mainstream distributions: Compatible OS List

We recommend RockyLinux 10.0, Debian 13.2, or Ubuntu 24.04.2 as default options.

On macOS and Windows, use VM software or Docker systemd images to run Pigsty.

We strongly recommend a fresh OS installation. If your server already runs Nginx, PostgreSQL, or similar services, consider deploying on new nodes.


Locale

We recommend setting en_US as the primary OS language, or at minimum ensuring this locale is available, so PostgreSQL logs are in English.

Some distributions (e.g., Debian) may not provide the en_US locale by default. Enable it with:

localedef -i en_US -f UTF-8 en_US.UTF-8
localectl set-locale LANG=en_US.UTF-8

For PostgreSQL, we strongly recommend using the built-in C.UTF-8 collation (PG 17+) as the default.

The configuration wizard automatically sets C.UTF-8 as the collation when PG version and OS support are detected.
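To pin this explicitly in the inventory, a sketch assuming the pg_locale parameter family (verify the exact names against the parameter reference):

```yaml
all:
  vars:
    pg_version: 17        # built-in C.UTF-8 collation requires PG 17+
    pg_locale: C.UTF-8    # database cluster locale (assumed parameter name)
```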


Ansible

Pigsty uses Ansible to control all managed nodes from the admin node. See Installing Ansible for details.

Pigsty installs Ansible on Infra nodes by default, making them usable as admin nodes (or backup admin nodes). For single-node deployment, the installation node serves as both the admin node running Ansible and the INFRA node hosting infrastructure.


Pigsty

You can install the latest stable Pigsty source with:

curl -fsSL https://repo.pigsty.io/get | bash;         # International
curl -fsSL https://repo.pigsty.cc/get | bash;         # Backup Mirror

To install a specific version, use the -s <version> parameter:

curl -fsSL https://repo.pigsty.io/get | bash -s v4.0.0
curl -fsSL https://repo.pigsty.cc/get | bash -s v4.0.0

To install the latest beta version:

curl -fsSL https://repo.pigsty.io/beta | bash;
curl -fsSL https://repo.pigsty.cc/beta | bash;

For developers or the latest development version, clone the repository directly:

git clone https://github.com/pgsty/pigsty.git;
cd pigsty; git checkout v4.0.0

If your environment lacks Internet access, download the source tarball from GitHub Releases or the Pigsty repository:

wget https://repo.pigsty.io/src/pigsty-v4.0.0.tgz
wget https://repo.pigsty.cc/src/pigsty-v4.0.0.tgz

5.3 - Planning Architecture and Nodes

How many nodes? Which modules need HA? How to plan based on available resources and requirements?

Pigsty uses a modular architecture. You can combine modules like building blocks and express your intent through declarative configuration.

Common Patterns

Here are common deployment patterns for reference. Customize based on your requirements:

| Pattern | INFRA | ETCD | PGSQL | MINIO | Description |
|---------|-------|------|-------|-------|-------------|
| Single-node (meta) | 1 | 1 | 1 | - | Single-node deployment default |
| Slim deploy (slim) | - | 1 | 1 | - | Database only, no monitoring infra |
| Infra-only (infra) | 1 | - | - | - | Monitoring infrastructure only |
| Rich deploy (rich) | 1 | 1 | 1 | 1 | Single-node + object storage + local repo with all extensions |

| Multi-node Pattern | INFRA | ETCD | PGSQL | MINIO | Description |
|--------------------|-------|------|-------|-------|-------------|
| Two-node (dual) | 1 | 1 | 2 | - | Semi-HA, tolerates specific node failure |
| Three-node (trio) | 3 | 3 | 3 | - | Standard HA, tolerates any one failure |
| Four-node (full) | 1 | 1 | 1+3 | - | Demo setup, single INFRA/ETCD |
| Production (simu) | 2 | 3 | n | n | 2 INFRA, 3 ETCD |
| Large-scale (custom) | 3 | 5 | n | n | 3 INFRA, 5 ETCD |

Your architecture choice depends on reliability requirements and available resources. Serious production deployments require at least 3 nodes for HA configuration. With only 2 nodes, use Semi-HA configuration.


Trade-offs

  • Pigsty monitoring requires at least 1 INFRA node. Production typically uses 2; large-scale deployments use 3.
  • PostgreSQL HA requires at least 1 ETCD node. Production typically uses 3; large-scale uses 5. Must be odd numbers.
  • Object storage (MinIO) requires at least 1 MINIO node. Production typically uses 4+ nodes in MNMD clusters.
  • Production PG clusters typically use at least two-node primary-replica configuration; serious deployments use 3 nodes; high read loads can have dozens of replicas.
  • For PostgreSQL, you can also use advanced configurations: offline instances, sync instances, standby clusters, delayed clusters, etc.

Single-Node Setup

The simplest configuration with everything on a single node. Installs four essential modules by default. Typically used for demos, devbox, or testing.

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 |

With an external S3/MinIO backup repository providing RTO/RPO guarantees, this configuration works for standard production environments.

Single-node variants such as slim, infra, and rich are listed in the common patterns table above.


Two-Node Setup

Two-node configuration enables database replication and Semi-HA capability with better data redundancy and limited failover support:

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | node-1 | pg-meta-1 (replica) | infra-1 | etcd-1 |
| 2 | node-2 | pg-meta-2 (primary) | - | - |

Two-node HA auto-failover has limitations. This “Semi-HA” setup only auto-recovers from specific node failures:

  • If node-1 fails: No automatic failover—requires manual promotion of node-2
  • If node-2 fails: Automatic failover works—node-1 auto-promoted
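A sketch of the two-node cluster definition matching the table above (IPs are placeholders for your own addresses):

```yaml
pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: replica }   # node-1: replica, alongside infra/etcd
    10.10.10.11: { pg_seq: 2, pg_role: primary }   # node-2: primary
  vars: { pg_cluster: pg-meta }
```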

Three-Node Setup

Three-node template provides true baseline HA configuration, tolerating any single node failure with automatic recovery.

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 |
| 2 | node-2 | pg-meta-2 | infra-2 | etcd-2 |
| 3 | node-3 | pg-meta-3 | infra-3 | etcd-3 |

Four-Node Setup

Pigsty Sandbox uses the standard four-node configuration.

| ID | NODE | PGSQL | INFRA | ETCD |
|----|------|-------|-------|------|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 |
| 2 | node-2 | pg-test-1 | - | - |
| 3 | node-3 | pg-test-2 | - | - |
| 4 | node-4 | pg-test-3 | - | - |

For demo purposes, INFRA / ETCD modules aren’t configured for HA. You can adjust further:

| ID | NODE | PGSQL | INFRA | ETCD | MINIO |
|----|------|-------|-------|------|-------|
| 1 | node-1 | pg-meta-1 | infra-1 | etcd-1 | minio-1 |
| 2 | node-2 | pg-test-1 | infra-2 | etcd-2 | - |
| 3 | node-3 | pg-test-2 | - | etcd-3 | - |
| 4 | node-4 | pg-test-3 | - | - | - |

More Nodes

With proper virtualization infrastructure or abundant resources, you can use more nodes for dedicated deployment of each module, achieving optimal reliability, observability, and performance.

| ID | NODE | INFRA | ETCD | MINIO | PGSQL |
|----|------|-------|------|-------|-------|
| 1 | 10.10.10.10 | infra-1 | - | - | pg-meta-1 |
| 2 | 10.10.10.11 | infra-2 | - | - | pg-meta-2 |
| 3 | 10.10.10.21 | - | etcd-1 | - | - |
| 4 | 10.10.10.22 | - | etcd-2 | - | - |
| 5 | 10.10.10.23 | - | etcd-3 | - | - |
| 6 | 10.10.10.31 | - | - | minio-1 | - |
| 7 | 10.10.10.32 | - | - | minio-2 | - |
| 8 | 10.10.10.33 | - | - | minio-3 | - |
| 9 | 10.10.10.34 | - | - | minio-4 | - |
| 10 | 10.10.10.40 | - | - | - | pg-src-1 |
| 11 | 10.10.10.41 | - | - | - | pg-src-2 |
| 12 | 10.10.10.42 | - | - | - | pg-src-3 |
| 13 | 10.10.10.50 | - | - | - | pg-test-1 |
| 14 | 10.10.10.51 | - | - | - | pg-test-2 |
| 15 | 10.10.10.52 | - | - | - | pg-test-3 |
| 16 | … | | | | |

5.4 - Setup Admin User and Privileges

Admin user, sudo, SSH, accessibility verification, and firewall configuration

Pigsty requires an OS admin user with passwordless SSH and Sudo privileges on all managed nodes.

This user must be able to SSH to all managed nodes and execute sudo commands on them.


User

Typically use names like dba or admin, avoiding root and postgres:

  • Using root for deployment is possible but not a production best practice.
  • Using postgres (pg_dbsu) as admin user is strictly prohibited.

Passwordless

The passwordless requirement is optional if you can accept entering a password for every ssh and sudo command.

Use -k|--ask-pass when running playbooks to prompt for SSH password, and -K|--ask-become-pass to prompt for sudo password.

./deploy.yml -k -K

Some enterprise security policies may prohibit passwordless ssh or sudo. In such cases, use the options above, or consider configuring a sudoers rule with a longer password cache time to reduce password prompts.


Create Admin User

Typically, your server/VM provider creates an initial admin user.

If unsatisfied with that user, Pigsty’s deployment playbook can create a new admin user for you.

Assuming you have root access or an existing admin user on the node, create an admin user with Pigsty itself:

./node.yml -k -K -t node_admin -e ansible_user=[your_existing_admin]

This leverages the existing admin to create a new one—a dedicated dba (uid=88) user described by these parameters, with sudo/ssh properly configured:

| Name | Description | Default |
|------|-------------|---------|
| node_admin_enabled | Enable node admin user | true |
| node_admin_uid | Node admin user UID | 88 |
| node_admin_username | Node admin username | dba |

Sudo

All admin users should have sudo privileges on all managed nodes, preferably with passwordless execution.

To configure an admin user with passwordless sudo from scratch, edit/create a sudoers file (assuming username vagrant):

echo '%vagrant ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/vagrant

For admin user dba, the /etc/sudoers.d/dba content should be:

%dba ALL=(ALL) NOPASSWD: ALL

If your security policy prohibits passwordless sudo, remove the NOPASSWD: part:

%dba ALL=(ALL) ALL

Ansible relies on sudo to execute commands with root privileges on managed nodes. In environments where sudo is unavailable (e.g., inside Docker containers), install sudo first.


SSH

Your current user should have passwordless SSH access to all managed nodes as the corresponding admin user.

Your current user can be the admin user itself, but this isn’t required—as long as you can SSH as the admin user.

SSH configuration is Linux 101, but here are the basics:

Generate SSH Key

If you don’t have an SSH key pair, generate one:

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa -q

Pigsty will do this for you during the bootstrap stage if you lack a key pair.

Copy SSH Key

Distribute your generated public key to remote (and local) servers, placing it in the admin user’s ~/.ssh/authorized_keys file on all nodes. Use the ssh-copy-id utility:

ssh-copy-id <ip>                        # Interactive password entry
sshpass -p <password> ssh-copy-id <ip>  # Non-interactive (use with caution)

Using Alias

When direct SSH access is unavailable (jumpserver, non-standard port, different credentials), configure SSH aliases in ~/.ssh/config:

Host meta
    HostName 10.10.10.10
    User dba                      # Different user on remote
    IdentityFile /etc/dba/id_rsa  # Non-standard key
    Port 24                       # Non-standard port
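You can check that an alias resolves as intended without connecting, using `ssh -G` (the throwaway config file path below is just for the demonstration):

```shell
# Write a throwaway config containing the alias
cat > /tmp/ssh_config_demo <<'EOF'
Host meta
    HostName 10.10.10.10
    User dba
    Port 24
EOF

# Print the resolved settings for the alias; no connection is made
ssh -G -F /tmp/ssh_config_demo meta | grep -E '^(hostname|user|port) '
```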

Reference the alias in the inventory using ansible_host for the real SSH alias:

nodes:
  hosts:          # If node `10.10.10.10` requires SSH alias `meta`
    10.10.10.10: { ansible_host: meta }  # Access via `ssh meta`

SSH parameters work directly in Ansible. See Ansible Inventory Guide for details. This technique enables accessing nodes in private networks via jumpservers, or using different ports and credentials, or using your local laptop as an admin node.


Check Accessibility

You should be able to passwordlessly ssh from the admin node to all managed nodes as your current user. The remote user (admin user) should have privileges to run passwordless sudo commands.

To verify passwordless ssh/sudo works, run this command on the admin node for all managed nodes:

ssh <ip|alias> 'sudo ls'

If there’s no password prompt or error, passwordless ssh/sudo is working as expected.


Firewall

Production deployments typically require firewall configuration to block unauthorized port access.

By default, block inbound access from office/Internet networks except:

  • SSH port 22 for node access
  • HTTP (80) / HTTPS (443) for WebUI services
  • PostgreSQL port 5432 for database access

If accessing PostgreSQL via other ports, allow them accordingly. See used ports for the complete port list.

  • 5432: PostgreSQL database
  • 6432: Pgbouncer connection pooler
  • 5433: PG primary service
  • 5434: PG replica service
  • 5436: PG default service
  • 5438: PG offline service
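As an illustration with firewalld (a hedged sketch of the configuration; adapt zones and source ranges to your environment, or use cloud security groups instead):

```shell
# Allow SSH, WebUI, and PostgreSQL; everything else stays blocked by the default zone policy
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=5432/tcp
firewall-cmd --reload
```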

5.5 - Sandbox

4-node sandbox environment for learning, testing, and demonstration

Pigsty provides a standard 4-node sandbox environment for learning, testing, and feature demonstration.

The sandbox uses fixed IP addresses and predefined identity identifiers, making it easy to reproduce various demo use cases.


Description

The default sandbox environment consists of 4 nodes, using the ha/full.yml configuration template.

| ID | IP Address | Node | PostgreSQL | INFRA | ETCD | MINIO |
|----|------------|------|------------|-------|------|-------|
| 1 | 10.10.10.10 | meta | pg-meta-1 | infra-1 | etcd-1 | minio-1 |
| 2 | 10.10.10.11 | node-1 | pg-test-1 | - | - | - |
| 3 | 10.10.10.12 | node-2 | pg-test-2 | - | - | - |
| 4 | 10.10.10.13 | node-3 | pg-test-3 | - | - | - |

The sandbox configuration can be summarized as the following config:

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq:  1 } }, vars: { etcd_cluster: etcd } }
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:  { pg_cluster: pg-meta }

    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica }
        10.10.10.13: { pg_seq: 3, pg_role: replica }
      vars: { pg_cluster: pg-test }

  vars:
    version: v4.0.0
    admin_ip: 10.10.10.10
    region: default
    pg_version: 18

pigsty-sandbox

PostgreSQL Clusters

The sandbox comes with a single-instance PostgreSQL cluster pg-meta on the meta node:

10.10.10.10 meta pg-meta-1
10.10.10.2  pg-meta          # Optional L2 VIP

There’s also a 3-instance PostgreSQL HA cluster pg-test deployed on the other three nodes:

10.10.10.11 node-1 pg-test-1
10.10.10.12 node-2 pg-test-2
10.10.10.13 node-3 pg-test-3
10.10.10.3  pg-test          # Optional L2 VIP

Two optional L2 VIPs are bound to the primary instances of pg-meta and pg-test clusters respectively.

Infrastructure

The meta node also hosts:

  • ETCD cluster: Single-node etcd cluster providing DCS service for PostgreSQL HA
  • MinIO cluster: Single-node minio cluster providing S3-compatible object storage
10.10.10.10 etcd-1
10.10.10.10 minio-1

Creating Sandbox

Pigsty provides out-of-the-box templates. You can use Vagrant to create a local sandbox, or use Terraform to create a cloud sandbox.

Local Sandbox (Vagrant)

Local sandbox uses VirtualBox/libvirt to create local virtual machines, running free on your Mac / PC.

To run the full 4-node sandbox, your machine should have at least 4 CPU cores and 8GB memory.

cd ~/pigsty
make full       # Create 4-node sandbox with default RockyLinux 9 image
make full9      # Create 4-node sandbox with RockyLinux 9
make full12     # Create 4-node sandbox with Debian 12
make full24     # Create 4-node sandbox with Ubuntu 24.04

For more details, please refer to Vagrant documentation.

Cloud Sandbox (Terraform)

Cloud sandbox uses public cloud API to create virtual machines. Easy to create and destroy, pay-as-you-go, ideal for quick testing.

Use spec/aliyun-full.tf template to create a 4-node sandbox on Alibaba Cloud:

cd ~/pigsty/terraform
cp spec/aliyun-full.tf terraform.tf
terraform init
terraform apply

For more details, please refer to Terraform documentation.


Other Specs

Besides the standard 4-node sandbox, Pigsty also provides other environment specs:

Single Node Devbox (meta)

The simplest 1-node environment for quick start, development, and testing:

make meta       # Create single-node devbox

Two Node Environment (dual)

2-node environment for testing primary-replica replication:

make dual       # Create 2-node environment

Three Node Environment (trio)

3-node environment for testing basic high availability:

make trio       # Create 3-node environment

Production Simulation (simu)

20-node large simulation environment for full production environment testing:

make simu       # Create 20-node production simulation environment

This environment includes:

  • 3 infrastructure nodes (meta1, meta2, meta3)
  • 2 HAProxy proxy nodes
  • 4 MinIO nodes
  • 5 ETCD nodes
  • 6 PostgreSQL nodes (2 clusters, 3 nodes each)

5.6 - Vagrant

Create local virtual machine environment with Vagrant

Vagrant is a popular local virtualization tool that creates local virtual machines in a declarative manner.

Pigsty requires a Linux environment to run. You can use Vagrant to easily create Linux virtual machines locally for testing.


Quick Start

Install Dependencies

First, ensure you have Vagrant and a virtual machine provider (such as VirtualBox or libvirt) installed on your system.

On macOS, you can use Homebrew for one-click installation:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install vagrant virtualbox ansible

On Linux, you can use VirtualBox or vagrant-libvirt as the VM provider.

Create Virtual Machines

Use the Pigsty-provided make shortcuts to create virtual machines:

cd ~/pigsty

make meta       # 1 node devbox for quick start, development, and testing
make full       # 4 node sandbox for HA testing and feature demonstration
make simu       # 20 node simubox for production environment simulation

# Other less common specs
make dual       # 2 node environment
make trio       # 3 node environment
make deci       # 10 node environment

You can use variant aliases to specify different operating system images:

make meta9      # Create single node with RockyLinux 9
make full12     # Create 4-node sandbox with Debian 12
make simu24     # Create 20-node simubox with Ubuntu 24.04

Available OS suffixes: 7 (EL7), 8 (EL8), 9 (EL9), 10 (EL10), 11 (Debian 11), 12 (Debian 12), 13 (Debian 13), 20 (Ubuntu 20.04), 22 (Ubuntu 22.04), 24 (Ubuntu 24.04)

Build Environment

You can also use the following aliases to create Pigsty build environments. These templates won’t replace the base image:

make oss        # 3 node OSS build environment
make pro        # 5 node PRO build environment
make rpm        # 3 node EL7/8/9 build environment
make deb        # 5 node Debian11/12 Ubuntu20/22/24 build environment
make all        # 7 node full build environment

Spec Templates

Pigsty provides multiple predefined VM specs in the vagrant/spec/ directory:

| Template | Nodes | Spec | Description | Alias |
|----------|-------|------|-------------|-------|
| meta.rb | 1 node | 2c4g x 1 | Single-node devbox | Devbox |
| dual.rb | 2 nodes | 1c2g x 2 | Two-node environment | - |
| trio.rb | 3 nodes | 1c2g x 3 | Three-node environment | - |
| full.rb | 4 nodes | 2c4g + 1c2g x 3 | 4-node full sandbox | Sandbox |
| deci.rb | 10 nodes | Mixed | 10-node environment | - |
| simu.rb | 20 nodes | Mixed | 20-node production simubox | Simubox |
| minio.rb | 4 nodes | 1c2g x 4 + disk | MinIO test environment | - |
| oss.rb | 3 nodes | 1c2g x 3 | 3-node OSS build environment | - |
| pro.rb | 5 nodes | 1c2g x 5 | 5-node PRO build environment | - |
| rpm.rb | 3 nodes | 1c2g x 3 | 3-node EL build environment | - |
| deb.rb | 5 nodes | 1c2g x 5 | 5-node Deb build environment | - |
| all.rb | 7 nodes | 1c2g x 7 | 7-node full build environment | - |

Each spec file contains a Specs variable describing the VM nodes. For example, full.rb contains the 4-node sandbox definition:

# full: pigsty full-featured 4-node sandbox for HA-testing & tutorial & practices

Specs = [
  { "name" => "meta"   , "ip" => "10.10.10.10" ,  "cpu" => "2" ,  "mem" => "4096" ,  "image" => "bento/rockylinux-9" },
  { "name" => "node-1" , "ip" => "10.10.10.11" ,  "cpu" => "1" ,  "mem" => "2048" ,  "image" => "bento/rockylinux-9" },
  { "name" => "node-2" , "ip" => "10.10.10.12" ,  "cpu" => "1" ,  "mem" => "2048" ,  "image" => "bento/rockylinux-9" },
  { "name" => "node-3" , "ip" => "10.10.10.13" ,  "cpu" => "1" ,  "mem" => "2048" ,  "image" => "bento/rockylinux-9" },
]

simu Spec Details

simu.rb provides a 20-node production environment simulation configuration:

  • 3 x infra nodes (meta1-3): 4c16g
  • 2 x haproxy nodes (proxy1-2): 1c2g
  • 4 x minio nodes (minio1-4): 1c2g
  • 5 x etcd nodes (etcd1-5): 1c2g
  • 6 x pgsql nodes (pg-src-1-3, pg-dst-1-3): 2c4g

Config Script

Use the vagrant/config script to generate the final Vagrantfile based on spec and options:

cd ~/pigsty
vagrant/config [spec] [image] [scale] [provider]

# Examples
vagrant/config meta                # Use 1-node spec with default EL9 image
vagrant/config dual el9            # Use 2-node spec with EL9 image
vagrant/config trio d12 2          # Use 3-node spec with Debian 12, double resources
vagrant/config full u22 4          # Use 4-node spec with Ubuntu 22, 4x resources
vagrant/config simu u24 1 libvirt  # Use 20-node spec with Ubuntu 24, libvirt provider

Image Aliases

The config script supports various image aliases:

| Distro | Alias | Vagrant Box |
|--------|-------|-------------|
| CentOS 7 | el7, 7, centos | generic/centos7 |
| Rocky 8 | el8, 8, rocky8 | bento/rockylinux-8 |
| Rocky 9 | el9, 9, rocky9, el | bento/rockylinux-9 |
| Rocky 10 | el10, rocky10 | rockylinux/10 |
| Debian 11 | d11, 11, debian11 | generic/debian11 |
| Debian 12 | d12, 12, debian12 | generic/debian12 |
| Debian 13 | d13, 13, debian13 | cloud-image/debian-13 |
| Ubuntu 20.04 | u20, 20, ubuntu20 | generic/ubuntu2004 |
| Ubuntu 22.04 | u22, 22, ubuntu22, ubuntu | generic/ubuntu2204 |
| Ubuntu 24.04 | u24, 24, ubuntu24 | bento/ubuntu-24.04 |

Resource Scaling

You can use the VM_SCALE environment variable to adjust the resource multiplier (default is 1):

VM_SCALE=2 vagrant/config meta     # Double the CPU/memory resources for meta spec

For example, using VM_SCALE=4 with the meta spec will adjust the default 2c4g to 8c16g:

Specs = [
  { "name" => "meta" , "ip" => "10.10.10.10", "cpu" => "8" , "mem" => "16384" , "image" => "bento/rockylinux-9" },
]
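The scaling is plain multiplication of the base CPU/memory numbers; a shell sketch of the arithmetic (assumed behavior of the config script, not its actual code):

```shell
# assumed scaling logic: base spec values multiplied by VM_SCALE
VM_SCALE=4
cpu=2        # base CPU cores of the meta spec
mem=4096     # base memory in MB
echo "$((cpu * VM_SCALE))c$((mem * VM_SCALE / 1024))g"   # prints 8c16g
```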

VM Management

Pigsty provides a set of Makefile shortcuts for managing virtual machines:

make           # Equivalent to make start
make new       # Destroy existing VMs and create new ones
make ssh       # Write VM SSH config to ~/.ssh/ (must run after creation)
make dns       # Write VM DNS records to /etc/hosts (optional)
make start     # Start VMs and configure SSH (up + ssh)
make up        # Start VMs with vagrant up
make halt      # Shutdown VMs (alias: down, dw)
make clean     # Destroy VMs (alias: del, destroy)
make status    # Show VM status (alias: st)
make pause     # Pause VMs (alias: suspend)
make resume    # Resume VMs
make nuke      # Destroy all VMs and volumes with virsh (libvirt only)
make info      # Show libvirt info (VMs, networks, storage volumes)

SSH Keys

Pigsty Vagrant templates use your ~/.ssh/id_rsa[.pub] as the SSH key for VMs by default.

Before starting, ensure you have a valid SSH key pair. If not, generate one with:

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa -q

Supported Images

Pigsty currently uses the following Vagrant Boxes for testing:

# x86_64 / amd64
el8 :  bento/rockylinux-8     (libvirt, 202502.21.0, (amd64))
el9 :  bento/rockylinux-9     (libvirt, 202502.21.0, (amd64))
el10:  rockylinux/10          (libvirt)

d11 :  generic/debian11       (libvirt, 4.3.12, (amd64))
d12 :  generic/debian12       (libvirt, 4.3.12, (amd64))
d13 :  cloud-image/debian-13  (libvirt)

u20 :  generic/ubuntu2004     (libvirt, 4.3.12, (amd64))
u22 :  generic/ubuntu2204     (libvirt, 4.3.12, (amd64))
u24 :  bento/ubuntu-24.04     (libvirt, 20250316.0.0, (amd64))

For Apple Silicon (aarch64) architecture, fewer images are available:

# aarch64 / arm64
bento/rockylinux-9 (virtualbox, 202502.21.0, (arm64))
bento/ubuntu-24.04 (virtualbox, 202502.21.0, (arm64))

You can find more available Box images on Vagrant Cloud.


Environment Variables

You can use the following environment variables to control Vagrant behavior:

export VM_SPEC='meta'              # Spec name
export VM_IMAGE='bento/rockylinux-9'  # Image name
export VM_SCALE='1'                # Resource scaling multiplier
export VM_PROVIDER='virtualbox'    # Virtualization provider
export VAGRANT_EXPERIMENTAL=disks  # Enable experimental disk features

Notes

5.7 - Terraform

Create virtual machine environment on public cloud with Terraform

Terraform is a popular “Infrastructure as Code” tool that you can use to create virtual machines on public clouds with one click.

Pigsty provides Terraform templates for Alibaba Cloud, AWS, and Tencent Cloud as examples.


Quick Start

Install Terraform

On macOS, you can use Homebrew to install Terraform:

brew install terraform

For other platforms, refer to the Terraform Official Installation Guide.

Initialize and Apply

Enter the Terraform directory, select a template, initialize provider plugins, and apply the configuration:

cd ~/pigsty/terraform
cp spec/aliyun-meta.tf terraform.tf   # Select template
terraform init                         # Install cloud provider plugins (first use)
terraform apply                        # Generate execution plan and create resources

After running the apply command, type yes to confirm when prompted. Terraform will create VMs and related cloud resources for you.

Get IP Address

After creation, print the public IP address of the admin node:

terraform output | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
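Given a hypothetical output line, the regex extracts just the IPv4 address (the variable name and IP below are placeholders, not real Terraform output):

```shell
# hypothetical terraform output line; grep -Eo extracts the bare IPv4 address
out='meta_ip = "47.100.20.30"'
echo "$out" | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'   # prints 47.100.20.30
```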

Configure SSH Access

Use the ssh script to automatically configure SSH aliases and distribute keys:

./ssh    # Write SSH config to ~/.ssh/pigsty_config and copy keys

This script writes the IP addresses from Terraform output to ~/.ssh/pigsty_config and automatically distributes SSH keys using the default password PigstyDemo4.
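Each entry written to ~/.ssh/pigsty_config is a standard ssh_config host block; an illustrative sketch (the user and IP shown here are placeholders, the real values come from the Terraform output):

```
Host meta
    HostName <public-ip-from-terraform-output>
    User root
```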

After configuration, you can login directly using hostnames:

ssh meta    # Login using hostname instead of IP

Destroy Resources

After testing, you can destroy all created cloud resources with one click:

terraform destroy

Template Specs

Pigsty provides multiple predefined cloud resource templates in the terraform/spec/ directory:

| Template File | Cloud Provider | Description |
|---------------|----------------|-------------|
| aliyun-meta.tf | Alibaba Cloud | Single-node meta template, supports all distros and AMD/ARM (default) |
| aliyun-meta-s3.tf | Alibaba Cloud | Single-node template + OSS bucket for PITR backup |
| aliyun-full.tf | Alibaba Cloud | 4-node sandbox template, supports all distros and AMD/ARM |
| aliyun-oss.tf | Alibaba Cloud | 5-node build template, supports all distros and AMD/ARM |
| aliyun-pro.tf | Alibaba Cloud | Multi-distro test template for cross-OS testing |
| aws-cn.tf | AWS | AWS China region 4-node environment |
| tencentcloud.tf | Tencent Cloud | Tencent Cloud 4-node environment |

When using a template, copy the template file to terraform.tf:

cd ~/pigsty/terraform
cp spec/aliyun-full.tf terraform.tf   # Use Alibaba Cloud 4-node sandbox template
terraform init && terraform apply

Variable Configuration

Pigsty’s Terraform templates use variables to control architecture, OS distribution, and resource configuration:

Architecture and Distribution

variable "architecture" {
  description = "Architecture type (amd64 or arm64)"
  type        = string
  default     = "amd64"    # Comment this line to use arm64
  #default     = "arm64"   # Uncomment to use arm64
}

variable "distro" {
  description = "Distribution code (el8,el9,el10,u22,u24,d12,d13)"
  type        = string
  default     = "el9"       # Default uses Rocky Linux 9
}

Resource Configuration

The following resource parameters can be configured in the locals block:

locals {
  bandwidth        = 100                    # Public bandwidth (Mbps)
  disk_size        = 40                     # System disk size (GB)
  spot_policy      = "SpotWithPriceLimit"   # Spot policy: NoSpot, SpotWithPriceLimit, SpotAsPriceGo
  spot_price_limit = 5                      # Max spot price (only effective with SpotWithPriceLimit)
}

Alibaba Cloud Configuration

Credential Setup

Add your Alibaba Cloud credentials to environment variables, for example in ~/.bash_profile or ~/.zshrc:

export ALICLOUD_ACCESS_KEY="<your_access_key>"
export ALICLOUD_SECRET_KEY="<your_secret_key>"
export ALICLOUD_REGION="cn-shanghai"

Supported Images

The following are commonly used ECS Public OS Image prefixes in Alibaba Cloud:

| Distro | Code | x86_64 Image Prefix | aarch64 Image Prefix |
|--------|------|---------------------|----------------------|
| CentOS 7.9 | el7 | centos_7_9_x64 | - |
| Rocky 8.10 | el8 | rockylinux_8_10_x64 | rockylinux_8_10_arm64 |
| Rocky 9.6 | el9 | rockylinux_9_6_x64 | rockylinux_9_6_arm64 |
| Rocky 10.0 | el10 | rockylinux_10_0_x64 | rockylinux_10_0_arm64 |
| Debian 11.11 | d11 | debian_11_11_x64 | - |
| Debian 12.11 | d12 | debian_12_11_x64 | debian_12_11_arm64 |
| Debian 13.2 | d13 | debian_13_2_x64 | debian_13_2_arm64 |
| Ubuntu 20.04 | u20 | ubuntu_20_04_x64 | - |
| Ubuntu 22.04 | u22 | ubuntu_22_04_x64 | ubuntu_22_04_arm64 |
| Ubuntu 24.04 | u24 | ubuntu_24_04_x64 | ubuntu_24_04_arm64 |
| Anolis 8.9 | an8 | anolisos_8_9_x64 | - |
| Alibaba Cloud Linux 3 | al3 | aliyun_3_0_x64 | - |

OSS Storage Configuration

The aliyun-meta-s3.tf template additionally creates an OSS bucket and related permissions for PostgreSQL PITR backup:

  • OSS Bucket: Creates a private bucket named pigsty-oss
  • RAM User: Creates a dedicated pigsty-oss-user user
  • Access Key: Generates AccessKey and saves to ~/pigsty.sk
  • IAM Policy: Grants full access to the bucket

AWS Configuration

Credential Setup

Set up AWS configuration and credential files:

# ~/.aws/config
[default]
region = cn-northwest-1

# ~/.aws/credentials
[default]
aws_access_key_id = <YOUR_AWS_ACCESS_KEY>
aws_secret_access_key = <AWS_ACCESS_SECRET>

If you need to use SSH keys, place the key files at:

~/.aws/pigsty-key
~/.aws/pigsty-key.pub

Tencent Cloud Configuration

Credential Setup

Add Tencent Cloud credentials to environment variables:

export TENCENTCLOUD_SECRET_ID="<your_secret_id>"
export TENCENTCLOUD_SECRET_KEY="<your_secret_key>"
export TENCENTCLOUD_REGION="ap-beijing"

Shortcut Commands

Pigsty provides some Makefile shortcuts for Terraform operations:

cd ~/pigsty/terraform

make u          # terraform apply -auto-approve + configure SSH
make d          # terraform destroy -auto-approve
make apply      # terraform apply (interactive confirmation)
make destroy    # terraform destroy (interactive confirmation)
make out        # terraform output
make ssh        # Run ssh script to configure SSH access
make r          # Reset terraform.tf to repository state

Notes

5.8 - Security

Security considerations for production Pigsty deployment

Pigsty’s default configuration is sufficient to cover the security needs of most scenarios.

Pigsty already provides out-of-the-box authentication and access control models that are secure enough for most scenarios.


If you want to further harden system security, here are some recommendations:


Confidentiality

Important Files

Protect your pigsty.yml configuration file or CMDB

  • The pigsty.yml configuration file usually contains highly sensitive confidential information. You should ensure its security.
  • Strictly control access permissions to admin nodes, limiting access to DBAs or Infra administrators only.
  • Strictly control access permissions to the pigsty.yml configuration file repository (if you manage it with git)

Protect your CA private key and other certificates, these files are very important.

  • Related files are generated by default in the files/pki directory under the Pigsty source directory on the admin node.
  • You should regularly back them up to a secure location.


Passwords

You MUST change these passwords when deploying to production, don’t use defaults!

If using MinIO, change the default MinIO user passwords and references in pgbackrest

If using remote backup repositories, enable backup encryption and set encryption passwords

  • Set pgbackrest_repo.*.cipher_type to aes-256-cbc
  • You can use ${pg_cluster} as part of the password to avoid all clusters using the same password
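A sketch of such a repository definition in pigsty.yml — the repo name and password below are illustrative placeholders, not defaults:

```yaml
pgbackrest_repo:            # backup repository definitions
  remote:                   # hypothetical remote repo name
    cipher_type: aes-256-cbc                 # enable AES-256-CBC backup encryption
    cipher_pass: pgBackRest.${pg_cluster}    # derive a per-cluster password
```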

Use secure and reliable password encryption algorithms for PostgreSQL

  • Use pg_pwd_enc default value scram-sha-256 instead of legacy md5
  • This is the default behavior. Unless there’s a special reason (supporting legacy old clients), don’t change it back to md5

Use passwordcheck extension to enforce strong passwords

  • Add $lib/passwordcheck to pg_libs to enforce password policies.
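For example, in pigsty.yml (a sketch; the other two libraries shown are common preload defaults):

```yaml
pg_libs: '$lib/passwordcheck, pg_stat_statements, auto_explain'
```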

Encrypt remote backups with encryption algorithms

  • Use repo_cipher_type in pgbackrest_repo backup repository definitions to enable encryption

Configure automatic password expiration for business users

  • You should set an automatic password expiration time for each business user to meet compliance requirements.

  • After configuring auto-expiration, don’t forget to regularly update these passwords during maintenance.

    - { name: dbuser_meta , password: Pleas3-ChangeThisPwd ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
    - { name: dbuser_view , password: Make.3ure-Compl1ance  ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
    - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
    - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [pg_monitor, dbrole_readonly]   ,comment: system replicator }
    - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
    - { name: dbuser_monitor ,roles: [pg_monitor] ,expire_in: 7300 ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    

Don’t log password change statements to postgres logs or other logs

SET log_statement TO 'none';
ALTER USER "{{ user.name }}" PASSWORD '{{ user.password }}';
SET log_statement TO DEFAULT;


IP Addresses

Bind specified IP addresses for postgres/pgbouncer/patroni, not all addresses.

  • The default pg_listen address is 0.0.0.0, meaning all IPv4 addresses.
  • Consider using pg_listen: '${ip},${vip},${lo}' to bind to specific IP address(es) for enhanced security.
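A sketch of the corresponding pigsty.yml setting, using the placeholders mentioned above:

```yaml
pg_listen: '${ip},${vip},${lo}'   # bind to node IP + cluster VIP + loopback instead of 0.0.0.0
```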

Don’t expose any ports directly to public IP, except infrastructure egress Nginx ports (default 80/443)

  • For convenience, components like Prometheus/Grafana listen on all IP addresses by default and can be accessed directly via public IP ports
  • You can modify their configurations to listen only on internal IP addresses, restricting access through the Nginx portal via domain names only. You can also use security groups or firewall rules to implement these security restrictions.
  • For convenience, Redis servers listen on all IP addresses by default. You can modify redis_bind_address to listen only on internal IP addresses.

Use HBA to restrict postgres client access

  • There’s a security-enhanced configuration template: security.yml

Restrict patroni management access: only infra/admin nodes can call control APIs



Network Traffic

Use SSL and domain names to access infrastructure components through Nginx

Use SSL to protect Patroni REST API

  • patroni_ssl_enabled is disabled by default, because enabling it affects health checks and API calls.
  • Note this is a global option; you must decide before deployment.

Use SSL to protect Pgbouncer client traffic

  • pgbouncer_sslmode defaults to disable
  • It has significant performance impact on Pgbouncer, so it’s disabled by default.

Integrity

Configure consistency-first mode for critical PostgreSQL database clusters (e.g., finance-related databases)

  • pg_conf database tuning template, using crit.yml will trade some availability for best data consistency.

Use crit node tuning template for better consistency.

  • node_tune host tuning template using crit can reduce dirty page ratio and lower data consistency risks.

Enable data checksums to detect silent data corruption.

  • pg_checksum defaults to off, but is recommended to enable.
  • When pg_conf = crit.yml is enabled, checksums are mandatory.

Log connection establishment/termination

  • This is disabled by default, but enabled by default in the crit.yml config template.
  • You can manually configure the cluster to enable log_connections and log_disconnections parameters.

Enable watchdog if you want to completely eliminate the possibility of split-brain during PG cluster failover

  • If your traffic goes through the recommended default HAProxy distribution, you won’t encounter split-brain even without watchdog.
  • If your machine hangs and Patroni is killed with kill -9, watchdog can serve as a fallback: automatic shutdown on timeout.
  • It’s best not to enable watchdog on infrastructure nodes.

Availability

Use sufficient nodes/instances for critical PostgreSQL database clusters

  • You need at least three nodes (able to tolerate one node failure) for production-grade high availability.
  • If you only have two nodes, you can only tolerate the failure of the standby node.
  • If you only have one node, use external S3/MinIO for cold backup and WAL archive storage.

For PostgreSQL, make trade-offs between availability and consistency

  • pg_rpo : Trade-off between availability and consistency
  • pg_rto : Trade-off between failure probability and impact

Don’t access databases directly via fixed IP addresses; use VIP, DNS, HAProxy, or combinations

  • Use HAProxy for service access
  • In case of failover/switchover, HAProxy will handle client traffic switching.

Use multiple infrastructure nodes in important production deployments (e.g., 2~3)

  • Small deployments or lenient scenarios can use a single infrastructure/admin node.
  • Large production deployments should have at least two infrastructure nodes as mutual backup.

Use sufficient etcd server instances, and use an odd number of instances (1,3,5,7)

6 - References

Detailed reference information and lists, including supported OS distros, available modules, monitor metrics, extensions, cost comparison and analysis, glossary

6.1 - Supported Linux

Pigsty compatible Linux OS distribution major versions and CPU architectures

Pigsty runs on Linux, supporting the amd64/x86_64 and arm64/aarch64 architectures, plus 3 major distro families: EL, Debian, Ubuntu.

Pigsty runs on bare metal without containers, and supports the latest two major releases of each of the 3 major distro families on both architectures.

Overview

Recommended OS versions: RockyLinux 10.0, Ubuntu 24.04, Debian 13.1.

| Distro | Arch | OS Code | PG18 | PG17 | PG16 | PG15 | PG14 | PG13 |
|--------|------|---------|------|------|------|------|------|------|
| RHEL / Rocky / Alma 10 | x86_64 | el10.x86_64 | | | | | | |
| RHEL / Rocky / Alma 10 | aarch64 | el10.aarch64 | | | | | | |
| Ubuntu 24.04 (noble) | x86_64 | u24.x86_64 | | | | | | |
| Ubuntu 24.04 (noble) | aarch64 | u24.aarch64 | | | | | | |
| Debian 13 (trixie) | x86_64 | d13.x86_64 | | | | | | |
| Debian 13 (trixie) | aarch64 | d13.aarch64 | | | | | | |

EL

Pigsty supports RHEL / Rocky / Alma / Anolis / CentOS 8, 9, 10.

| EL Distro | Arch | OS Code | PG18 | PG17 | PG16 | PG15 | PG14 | PG13 |
|-----------|------|---------|------|------|------|------|------|------|
| RHEL10 / Rocky10 / Alma10 | x86_64 | el10.x86_64 | | | | | | |
| RHEL10 / Rocky10 / Alma10 | aarch64 | el10.aarch64 | | | | | | |
| RHEL9 / Rocky9 / Alma9 | x86_64 | el9.x86_64 | | | | | | |
| RHEL9 / Rocky9 / Alma9 | aarch64 | el9.aarch64 | | | | | | |
| RHEL8 / Rocky8 / Alma8 | x86_64 | el8.x86_64 | | | | | | |
| RHEL8 / Rocky8 / Alma8 | aarch64 | el8.aarch64 | | | | | | |
| RHEL7 / CentOS7 | x86_64 | el7.x86_64 | | | | | | |
| RHEL7 / CentOS7 | aarch64 | - | | | | | | |

Ubuntu

Pigsty supports Ubuntu 24.04 / 22.04:

| Ubuntu Distro | Arch | OS Code | PG18 | PG17 | PG16 | PG15 | PG14 | PG13 |
|---------------|------|---------|------|------|------|------|------|------|
| Ubuntu 24.04 (noble) | x86_64 | u24.x86_64 | | | | | | |
| Ubuntu 24.04 (noble) | aarch64 | u24.aarch64 | | | | | | |
| Ubuntu 22.04 (jammy) | x86_64 | u22.x86_64 | | | | | | |
| Ubuntu 22.04 (jammy) | aarch64 | u22.aarch64 | | | | | | |
| Ubuntu 20.04 (focal) | x86_64 | u20.x86_64 | | | | | | |
| Ubuntu 20.04 (focal) | aarch64 | - | | | | | | |

Debian

Pigsty supports Debian 12 / 13, latest Debian 13.1 recommended:

| Debian Distro | Arch | OS Code | PG18 | PG17 | PG16 | PG15 | PG14 | PG13 |
|---------------|------|---------|------|------|------|------|------|------|
| Debian 13 (trixie) | x86_64 | d13.x86_64 | | | | | | |
| Debian 13 (trixie) | aarch64 | d13.aarch64 | | | | | | |
| Debian 12 (bookworm) | x86_64 | d12.x86_64 | | | | | | |
| Debian 12 (bookworm) | aarch64 | d12.aarch64 | | | | | | |
| Debian 11 (bullseye) | x86_64 | d11.x86_64 | | | | | | |
| Debian 11 (bullseye) | aarch64 | - | | | | | | |

Vagrant

For local VM deployment, use these Vagrant base images (same as used in Pigsty dev):


Terraform

For cloud deployment, use these Terraform base images (Aliyun example):

  • Rocky 8.10 : rockylinux_8_10_x64_20G_alibase_20240923.vhd
  • Rocky 9.6 : rockylinux_9_6_x64_20G_alibase_20250101.vhd
  • Ubuntu 22.04 : ubuntu_22_04_x64_20G_alibase_20240926.vhd
  • Ubuntu 24.04 : ubuntu_24_04_x64_20G_alibase_20240923.vhd
  • Debian 12.11 : debian_12_11_x64_20G_alibase_20241201.vhd
  • Debian 13 : debian_13_x64_20G_alibase_20250101.vhd

6.2 - Modules

This article lists available Pigsty modules and the current module planning.

Official Modules

| Module | Category | Status | Docs Path | Summary |
|--------|----------|--------|-----------|---------|
| PGSQL | Core | GA | /docs/pgsql | High-availability PostgreSQL clusters with built-in backup, monitoring, SOP, and extension ecosystem. |
| INFRA | Core | GA | /docs/infra | Local software repository + VictoriaMetrics/Logs/Traces + Grafana infrastructure stack. |
| NODE | Core | GA | /docs/node | Node initialization and convergence: system tuning, admin, HAProxy, Vector, Docker, etc. |
| ETCD | Core | GA | /docs/etcd | DCS for PostgreSQL HA (service discovery, config, leader-election metadata). |
| MINIO | Extension | GA | /docs/minio | S3-compatible object storage, optionally used as PostgreSQL backup repository. |
| REDIS | Extension | GA | /docs/redis | Redis standalone/sentinel/cluster deployment and monitoring. |
| FERRET | Extension | GA | /docs/ferret | FerretDB module (MONGO API compatibility) for MongoDB protocol access over PG. |
| DOCKER | Extension | GA | /docs/docker | Docker daemon and the runtime capability for containerized apps. |
| JUICE | Extension | BETA | /docs/juice | JuiceFS distributed file system using PostgreSQL as metadata engine. |
| VIBE | Extension | BETA | /docs/vibe | Browser-based dev environment with Code-Server, JupyterLab, Node.js, and Claude Code. |

Core Modules

Pigsty provides four core modules that are important for delivering complete highly available PostgreSQL services:

  • PGSQL: Self-healing PostgreSQL clusters with HA, PITR, IaC, SOP, monitoring, and 444 extensions.
  • INFRA: Local software repository, Prometheus, Grafana, Loki, AlertManager, PushGateway, Blackbox Exporter…
  • NODE: Node convergence for hostname, timezone, NTP, ssh, sudo, haproxy, docker, vector, keepalived.
  • ETCD: Distributed key-value store used as DCS for HA PostgreSQL clusters: consensus leader election/config management/service discovery.

Although these four modules are usually installed together, separate use is still feasible. In practice, only the NODE module is usually mandatory.


Extension Modules

Pigsty provides six extension modules. They are not mandatory for core functionality, but can enhance PostgreSQL capabilities:

  • MINIO: S3-compatible object storage, optional PostgreSQL backup repository, with production deployment and monitoring support.
  • REDIS: Redis server with standalone/sentinel/cluster production deployment and full monitoring support.
  • MONGO: Native FerretDB deployment support, adding MongoDB wire-protocol compatible APIs to PostgreSQL.
  • DOCKER: Docker daemon service for one-click deployment of stateless software templates on Pigsty.
  • JUICE: JuiceFS distributed filesystem module using PostgreSQL as metadata engine, providing shared POSIX storage.
  • VIBE: Browser-based development environment with Code-Server, JupyterLab, Node.js, and Claude Code.

Ecosystem Modules

The modules below are closely related to the PostgreSQL ecosystem. They are optional ecosystem capabilities and are not counted in the 10 official modules above:

6.3 - Extensions

This page lists PostgreSQL extensions supported by Pigsty and their availability overview.

Pigsty extension package data is synchronized from ~/pgsty/pgext/content/list/pkg.md. For full details, see PGEXT.CLOUD.

There are currently 444 available PostgreSQL extensions, grouped into 372 packages.

TIME

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| timescaledb | 2.24.0 | TIME | Enables scalable inserts and complex queries for time-series data |
| timescaledb_toolkit | 1.22.0 | TIME | Library of analytical hyperfunctions, time-series pipelining, and other SQL utilities |
| pg_timeseries | 0.2.0 | TIME | Convenience API for time series stack |
| periods | 1.2.3 | TIME | Provide Standard SQL functionality for PERIODs and SYSTEM VERSIONING |
| temporal_tables | 1.2.2 | TIME | temporal tables |
| emaj | 4.7.1 | TIME | Enables fine-grained write logging and time travel on subsets of the database. |
| table_version | 1.11.1 | TIME | PostgreSQL table versioning extension |
| pg_cron | 1.6.7 | TIME | Job scheduler for PostgreSQL |
| pg_task | 1.0.0 | TIME | execute any sql command at any specific time at background |
| pg_later | 0.4.0 | TIME | Run queries now and get results later |
| pg_background | 1.5 | TIME | Run SQL queries in the background |

GIS

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| postgis | 3.6.1 | GIS | PostGIS geometry and geography spatial types and functions |
| pgrouting | 3.8.0 | GIS | pgRouting Extension |
| pointcloud | 1.2.5 | GIS | data type for lidar point clouds |
| pg_h3 | 4.2.3 | GIS | H3 bindings for PostgreSQL |
| q3c | 2.0.1 | GIS | q3c sky indexing plugin |
| ogr_fdw | 1.1.7 | GIS | foreign-data wrapper for GIS data access |
| geoip | 0.3.0 | GIS | IP-based geolocation query |
| pg_polyline | 0.0.1 | GIS | Fast Google Encoded Polyline encoding & decoding for postgres |
| pg_geohash | 1.0 | GIS | Handle geohash based functionality for spatial coordinates |
| mobilitydb | 1.3.0 | GIS | MobilityDB geospatial trajectory data management & analysis platform |
| pg_tzf | 0.2.3 | GIS | Fast lookup timezone name by GPS coordinates |
| earthdistance | 1.2 | GIS | calculate great-circle distances on the surface of the Earth |

RAG

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| pgvector | 0.8.1 | RAG | vector data type and ivfflat and hnsw access methods |
| vchord | 1.0.0 | RAG | Vector database plugin for Postgres, written in Rust |
| pgvectorscale | 0.9.0 | RAG | Advanced indexing for vector data with DiskANN |
| pg_vectorize | 0.26.0 | RAG | The simplest way to do vector search on Postgres |
| pg_similarity | 1.0 | RAG | support similarity queries |
| smlar | 1.0 | RAG | Effective similarity search |
| pg_summarize | 0.0.1 | RAG | Text Summarization using LLMs. Built using pgrx |
| pg_tiktoken | 0.0.1 | RAG | tiktoken tokenizer for use with OpenAI models in postgres |
| pg4ml | 2.0 | RAG | Machine learning framework for PostgreSQL |
| pgml | 2.10.0 | RAG | Run AI/ML workloads with SQL interface |

FTS

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| pg_search | 0.21.4 | FTS | Full text search for PostgreSQL using BM25 |
| pgroonga | 4.0.4 | FTS | Use Groonga as index, fast full text search platform for all languages! |
| pg_bigm | 1.2 | FTS | create 2-gram (bigram) index for faster full text search. |
| zhparser | 2.3 | FTS | a parser for full-text search of Chinese |
| pg_bestmatch | 0.0.2 | FTS | Generate BM25 sparse vector inside PostgreSQL |
| vchord_bm25 | 0.3.0 | FTS | A postgresql extension for bm25 ranking algorithm |
| pg_tokenizer | 0.1.1 | FTS | Tokenizers for full-text search |
| pg_biscuit | 2.2.2 | FTS | IAM-LIKE pattern matching with bitmap indexing |
| pg_textsearch | 0.4.0 | FTS | Full-text search with BM25 ranking |
| hunspell_cs_cz | 1.0 | FTS | Czech Hunspell Dictionary |
| hunspell_de_de | 1.0 | FTS | German Hunspell Dictionary |
| hunspell_en_us | 1.0 | FTS | en_US Hunspell Dictionary |
| hunspell_fr | 1.0 | FTS | French Hunspell Dictionary |
| hunspell_ne_np | 1.0 | FTS | Nepali Hunspell Dictionary |
| hunspell_nl_nl | 1.0 | FTS | Dutch Hunspell Dictionary |
| hunspell_nn_no | 1.0 | FTS | Norwegian (norsk) Hunspell Dictionary |
| hunspell_pt_pt | 1.0 | FTS | Portuguese Hunspell Dictionary |
| hunspell_ru_ru | 1.0 | FTS | Russian Hunspell Dictionary |
| hunspell_ru_ru_aot | 1.0 | FTS | Russian Hunspell Dictionary (from AOT.ru group) |
| fuzzystrmatch | 1.2 | FTS | determine similarities and distance between strings |
| pg_trgm | 1.6 | FTS | text similarity measurement and index searching based on trigrams |

OLAP

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| citus | 14.0.0 | OLAP | Distributed PostgreSQL as an extension |
| hydra | 1.1.2 | OLAP | Hydra Columnar extension |
| pg_analytics | 0.3.7 | OLAP | Postgres for analytics, powered by DuckDB |
| pg_duckdb | 1.1.1 | OLAP | DuckDB Embedded in Postgres |
| pg_mooncake | 0.2.0 | OLAP | Columnstore Table in Postgres |
| pg_clickhouse | 0.1.3 | OLAP | Interfaces to query ClickHouse databases from PostgreSQL |
| duckdb_fdw | 1.1.2 | OLAP | DuckDB Foreign Data Wrapper |
| pg_parquet | 0.5.1 | OLAP | copy data between Postgres and Parquet |
| pg_fkpart | 1.7.0 | OLAP | Table partitioning by foreign key utility |
| pg_partman | 5.4.0 | OLAP | Extension to manage partitioned tables by time or ID |
| plproxy | 2.11.0 | OLAP | Database partitioning implemented as procedural language |
| pg_strom | 6.0 | OLAP | PG-Strom - big-data processing acceleration using GPU and NVME |
| tablefunc | 1.0 | OLAP | functions that manipulate whole tables, including crosstab |

FEAT

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| age | 1.6.0 | FEAT | AGE graph database extension |
| hll | 2.19 | FEAT | type for storing hyperloglog data |
| rum | 1.3.15 | FEAT | RUM index access method |
| pg_ai_query | 0.1.1 | FEAT | AI-powered SQL query generation for PostgreSQL |
| pg_ttl_index | 2.0.0 | FEAT | Automatic data expiration with TTL indexes |
| pg_graphql | 1.5.12 | FEAT | Add in-database GraphQL support |
| pg_jsonschema | 0.3.3 | FEAT | PostgreSQL extension providing JSON Schema validation |
| jsquery | 1.2 | FEAT | data type for jsonb inspection |
| pg_hint_plan | 1.8.0 | FEAT | Give PostgreSQL ability to manually force some decisions in execution plans. |
| hypopg | 1.4.2 | FEAT | Hypothetical indexes for PostgreSQL |
| index_advisor | 0.2.0 | FEAT | Query index advisor |
| pg_plan_filter | 0.0.1 | FEAT | filter statements by their execution plans. |
| imgsmlr | 1.0 | FEAT | Image similarity with haar |
| pg_ivm | 1.13 | FEAT | incremental view maintenance on PostgreSQL |
| pg_incremental | 1.2.0 | FEAT | Incremental Processing by Crunchy Data |
| pgmq | 1.9.0 | FEAT | A lightweight message queue. Like AWS SQS and RSMQ but on Postgres. |
| pgq | 3.5.1 | FEAT | Generic queue for PostgreSQL |
| orioledb | 1.5 | FEAT | OrioleDB, the next generation transactional engine |
| pg_cardano | 1.1.1 | FEAT | A suite of Cardano-related tools |
| rdkit | 202503.1 | FEAT | Cheminformatics functionality for PostgreSQL. |
| omnigres | 0.2.14 | FEAT | Advanced adapter for Postgres extensions |
| bloom | 1.0 | FEAT | bloom access method - signature file based index |

LANG

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| pg_tle | 1.5.2 | LANG | Trusted Language Extensions for PostgreSQL |
| plv8 | 3.2.4 | LANG | PL/JavaScript (v8) trusted procedural language |
| pljs | 1.0.4 | LANG | PL/JS trusted procedural language |
| pllua | 2.0.12 | LANG | Lua as a procedural language |
| plprql | 18.0.0 | LANG | Use PRQL in PostgreSQL - Pipelined Relational Query Language |
| pldebugger | 1.9 | LANG | server-side support for debugging PL/pgSQL functions |
| plpgsql_check | 2.8.5 | LANG | extended check for plpgsql functions |
| plprofiler | 4.2.5 | LANG | server-side support for profiling PL/pgSQL functions |
| plsh | 1.20220917 | LANG | PL/sh procedural language |
| pljava | 1.6.10 | LANG | PL/Java procedural language |
| plr | 8.4.8 | LANG | load R interpreter and execute R script from within a database |
| plxslt | 0.20140221 | LANG | XSLT procedural language for PostgreSQL |
| pgtap | 1.3.4 | LANG | Unit testing for PostgreSQL |
| faker | 0.5.3 | LANG | Wrapper for the Faker Python library |
| dbt2 | 0.61.7 | LANG | OSDL-DBT-2 test kit |
| pltcl | 1.0 | LANG | PL/Tcl procedural language |
| plperl | 1.0 | LANG | PL/Perl procedural language |
| plperlu | 1.0 | LANG | PL/PerlU untrusted procedural language |
| plpgsql | 1.0 | LANG | PL/pgSQL procedural language |
| plpython3u | 1.0 | LANG | PL/Python3U untrusted procedural language |

TYPE

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| pg_prefix | 1.2.10 | TYPE | Prefix Range module for PostgreSQL |
| pg_semver | 0.41.0 | TYPE | Semantic version data type |
| pgunit | 7.10 | TYPE | SI units extension |
| pgpdf | 0.1.0 | TYPE | PDF type with meta admin & Full-Text Search |
| pglite_fusion | 0.0.6 | TYPE | Embed an SQLite database in your PostgreSQL table |
| md5hash | 1.0.1 | TYPE | type for storing 128-bit binary data inline |
| asn1oid | 1.6 | TYPE | asn1oid extension |
| pg_roaringbitmap | 1.1.0 | TYPE | support for Roaring Bitmaps |
| pgfaceting | 0.2.0 | TYPE | fast faceting queries using an inverted index |
| pgsphere | 1.5.2 | TYPE | spherical objects with useful functions, operators and index support |
| pg_country | 0.0.3 | TYPE | Country data type, ISO 3166-1 |
| pg_xenophile | 0.8.3 | TYPE | More than the bare necessities for PostgreSQL i18n and l10n. |
| pg_currency | 0.0.3 | TYPE | Custom PostgreSQL currency type in 1Byte |
| pgcollection | 1.1.0 | TYPE | Memory optimized data type to be used inside of plpgsql func |
| pgmp | 1.0.5 | TYPE | Multiple Precision Arithmetic extension |
| numeral | 1.3 | TYPE | numeral datatypes extension |
| pg_rational | 0.0.2 | TYPE | bigint fractions |
| pguint | 1.20250815 | TYPE | unsigned integer types |
| pg_uint128 | 1.1.1 | TYPE | Native uint128 type |
| hashtypes | 0.1.5 | TYPE | sha1, md5 and other data types for PostgreSQL |
| ip4r | 2.4.2 | TYPE | IPv4/v6 and IPv4/v6 range index type for PostgreSQL |
| pg_duration | 1.0.2 | TYPE | data type for representing durations |
| pg_uri | 1.20151224 | TYPE | URI Data type for PostgreSQL |
| pg_emailaddr | 0 | TYPE | Email address type for PostgreSQL |
| pg_acl | 1.0.4 | TYPE | ACL Data type |
| debversion | 1.2.0 | TYPE | Debian version number data type |
| pg_rrule | 0.3.0 | TYPE | RRULE field type for PostgreSQL |
| timestamp9 | 1.4.0 | TYPE | timestamp nanosecond resolution |
| chkpass | 1.0 | TYPE | data type for auto-encrypted passwords |
| isn | 1.2 | TYPE | data types for international product numbering standards |
| seg | 1.4 | TYPE | data type for representing line segments or floating-point intervals |
| cube | 1.5 | TYPE | data type for multidimensional cubes |
| ltree | 1.3 | TYPE | data type for hierarchical tree-like structures |
| hstore | 1.8 | TYPE | data type for storing sets of (key, value) pairs |
| citext | 1.6 | TYPE | data type for case-insensitive character strings |
| xml2 | 1.1 | TYPE | XPath querying and XSLT |

UTIL

| Extension | Version | Category | Description |
|-----------|---------|----------|-------------|
| pg_gzip | 1.0.0 | UTIL | gzip and gunzip functions. |
| pg_bzip | 1.0.0 | UTIL | Bzip compression and decompression |
| pg_zstd | 1.1.2 | UTIL | Zstandard compression algorithm implementation in PostgreSQL |
| pg_http | 1.7.0 | UTIL | HTTP client for PostgreSQL, allows web page retrieval inside the database. |
| pg_net | 0.20.0 | UTIL | Async HTTP Requests |
| pg_curl | 2.4.5 | UTIL | Run curl actions for data transfer in URL syntax |
| pg_retry | 1.0.0 | UTIL | Retry SQL statements on transient errors with exponential backoff |
| pgjq | 0.1.0 | UTIL | Use jq in Postgres |
| pgjwt | 0.2.0 | UTIL | JSON Web Token API for Postgresql |
| pg_smtp_client | 0.2.1 | UTIL | PostgreSQL extension to send email using SMTP |
| pg_html5_email_address | 1.2.3 | UTIL | PostgreSQL email validation that is consistent with the HTML5 spec |
| url_encode | 1.2.5 | UTIL | url_encode, url_decode functions |
| pgsql_tweaks | 1.0.2 | UTIL | Some functions and views for daily usage |
| pg_extra_time | 2.0.0 | UTIL | Some date time functions and operators |
| pgpcre | 0.20190509 | UTIL | Perl Compatible Regular Expression functions |
| icu_ext | 1.10.0 | UTIL | Access ICU functions |
| pgqr | 1.0 | UTIL | QR Code generator from PostgreSQL |
| pg_protobuf | 1.0 | UTIL | Protobuf support for PostgreSQL |
| pg_envvar | 1.0.1 | UTIL | Fetch the value of an environment variable |
| floatfile | 1.3.1 | UTIL | Simple file storage for arrays of floats |
| pg_render | 0.1.3 | UTIL | Render HTML in SQL |
| pg_readme | 0.7.0 | UTIL | Generate a README.md document for a database extension or schema |
| ddl_historization | 0.0.7 | UTIL | Historize the ddl changes inside PostgreSQL database |
| data_historization | 1.1.0 | UTIL | PLPGSQL Script to historize data in partitioned table |
| pg_schedoc | 0.0.1 | UTIL | Cross documentation between Django and DBT projects |
| pg_hashlib | 1.1 | UTIL | Stable hash functions for Postgres |
| pg_xxhash | 0.0.1 | UTIL | xxhash functions for PostgreSQL |
| shacrypt | 1.1 | UTIL | Implements SHA256-CRYPT and SHA512-CRYPT password encryption schemes |
| cryptint | 1.0.0 | UTIL | Encryption functions for int and bigint values |
| pg_ecdsa | 1.0 | UTIL | uECC bindings for Postgres |
| pgsparql | 1.0 | UTIL | Query SPARQL datasource with SQL |

FUNC

| Extension | Version | Category | Description |
|---|---|---|---|
| pg_idkit | 0.4.0 | FUNC | multi-tool for generating new/niche universally unique identifiers (ex. UUIDv6, ULID, KSUID) |
| pgx_ulid | 0.2.2 | FUNC | ulid type and methods |
| pg_uuidv7 | 1.7.0 | FUNC | Create UUIDv7 values in postgres |
| permuteseq | 1.2.2 | FUNC | Pseudo-randomly permute sequences with a format-preserving encryption on elements |
| pg_hashids | 1.3 | FUNC | Short unique id generator for PostgreSQL, using hashids |
| sequential_uuids | 1.0.3 | FUNC | generator of sequential UUIDs |
| pg_typeid | 0.3.0 | FUNC | Allows to use TypeIDs in Postgres natively |
| topn | 2.7.0 | FUNC | type for top-n JSONB |
| quantile | 1.1.8 | FUNC | Quantile aggregation function |
| lower_quantile | 1.0.3 | FUNC | Lower quantile aggregate function |
| count_distinct | 3.0.2 | FUNC | An alternative to COUNT(DISTINCT …) aggregate, usable with HashAggregate |
| omnisketch | 1.0.2 | FUNC | data structure for on-line agg of data into approximate sketch |
| ddsketch | 1.0.1 | FUNC | Provides ddsketch aggregate function |
| vasco | 0.1.0 | FUNC | discover hidden correlations in your data with MIC |
| pgxicor | 0.1.0 | FUNC | XI Correlation Coefficient in Postgres |
| pg_weighted_statistics | 1.0.0 | FUNC | High-performance weighted statistics functions for sparse data |
| tdigest | 1.4.3 | FUNC | Provides tdigest aggregate function. |
| first_last_agg | 0.1.4 | FUNC | first() and last() aggregate functions |
| extra_window_functions | 1.0 | FUNC | Extra Window Functions for PostgreSQL |
| floatvec | 1.1.1 | FUNC | Math for vectors (arrays) of numbers |
| aggs_for_vecs | 1.4.0 | FUNC | Aggregate functions for array inputs |
| aggs_for_arrays | 1.3.3 | FUNC | Various functions for computing statistics on arrays of numbers |
| pg_csv | 1.0.1 | FUNC | Flexible CSV processing for Postgres |
| pg_arraymath | 1.1 | FUNC | Array math and operators that work element by element on the contents of arrays |
| pg_math | 1.0 | FUNC | GSL statistical functions for postgresql |
| pg_random | 2.0.0 | FUNC | random data generator |
| pg_base36 | 1.0.0 | FUNC | Integer Base36 types |
| pg_base62 | 0.0.1 | FUNC | Base62 extension for PostgreSQL |
| pg_base58 | 0.0.1 | FUNC | Base58 Encoder/Decoder Extension for PostgreSQL |
| pg_financial | 1.0.1 | FUNC | Financial aggregate functions |
| pg_convert | 0.1.0 | FUNC | conversion functions for spatial, routing and other specialized uses |
| refint | 1.0 | FUNC | functions for implementing referential integrity (obsolete) |
| autoinc | 1.0 | FUNC | functions for autoincrementing fields |
| insert_username | 1.0 | FUNC | functions for tracking who changed a table |
| moddatetime | 1.0 | FUNC | functions for tracking last modification time |
| tsm_system_time | 1.0 | FUNC | TABLESAMPLE method which accepts time in milliseconds as a limit |
| dict_xsyn | 1.0 | FUNC | text search dictionary template for extended synonym processing |
| tsm_system_rows | 1.0 | FUNC | TABLESAMPLE method which accepts number of rows as a limit |
| tcn | 1.0 | FUNC | Triggered change notifications |
| uuid-ossp | 1.1 | FUNC | generate universally unique identifiers (UUIDs) |
| btree_gist | 1.7 | FUNC | support for indexing common datatypes in GiST |
| btree_gin | 1.3 | FUNC | support for indexing common datatypes in GIN |
| intarray | 1.5 | FUNC | functions, operators, and index support for 1-D arrays of integers |
| intagg | 1.1 | FUNC | integer aggregator and enumerator (obsolete) |
| dict_int | 1.0 | FUNC | text search dictionary template for integers |
| unaccent | 1.1 | FUNC | text search dictionary that removes accents |

ADMIN

| Extension | Version | Category | Description |
|---|---|---|---|
| pg_repack | 1.5.3 | ADMIN | Reorganize tables in PostgreSQL databases with minimal locks |
| pg_rewrite | 2.0.0 | ADMIN | Allows reads and writes to the table during rewriting |
| pg_squeeze | 1.9.1 | ADMIN | A tool to remove unused space from a relation. |
| pg_dirtyread | 2.7 | ADMIN | Read dead but unvacuumed rows from table |
| pgfincore | 1.3.1 | ADMIN | examine and manage the os buffer cache |
| pg_cooldown | 0.1 | ADMIN | remove buffered pages for specific relations |
| pg_ddlx | 0.30 | ADMIN | DDL eXtractor functions |
| pglinter | 1.0.1 | ADMIN | PostgreSQL Linting and Analysis Extension |
| pg_prioritize | 1.0.4 | ADMIN | get and set the priority of PostgreSQL backends |
| pg_checksums | 1.3 | ADMIN | Activate/deactivate/verify checksums in offline Postgres clusters |
| pg_readonly | 1.0.3 | ADMIN | cluster database read only |
| pgdd | 0.6.1 | ADMIN | Introspect pg data dictionary via standard SQL |
| pg_permissions | 1.4 | ADMIN | view object permissions and compare them with the desired state |
| pgautofailover | 2.2 | ADMIN | pg_auto_failover |
| pg_catcheck | 1.6.0 | ADMIN | Diagnosing system catalog corruption |
| preprepare | 0.9 | ADMIN | Pre Prepare your Statement server side |
| pg_upless | 0.0.3 | ADMIN | Detect Useless UPDATE |
| pgcozy | 1.0 | ADMIN | Pre-warming shared buffers according to previous pg_buffercache snapshots for PostgreSQL. |
| pg_orphaned | 1.0 | ADMIN | Deal with orphaned files |
| pg_crash | 1.0 | ADMIN | Send random signals to random processes |
| pg_cheat_funcs | 1.0 | ADMIN | Provides cheat (but useful) functions |
| pg_fio | 1.0 | ADMIN | PostgreSQL File I/O Functions |
| pg_savior | 0.0.1 | ADMIN | Postgres extension to save OOPS mistakes |
| safeupdate | 1.5 | ADMIN | Require criteria for UPDATE and DELETE |
| pg_drop_events | 0.1.0 | ADMIN | logs transaction ids of drop table, drop column, drop materialized view statements |
| table_log | 0.6.4 | ADMIN | record table modification logs and PITR for table/row |
| pgagent | 4.2.3 | ADMIN | A PostgreSQL job scheduler |
| pg_prewarm | 1.2 | ADMIN | prewarm relation data |
| pgpool | 4.7.0 | ADMIN | Administrative functions for pgPool |
| lo | 1.1 | ADMIN | Large Object maintenance |
| basic_archive | - | ADMIN | an example of an archive module |
| basebackup_to_shell | - | ADMIN | adds a custom basebackup target called shell |
| old_snapshot | 1.0 | ADMIN | utilities in support of old_snapshot_threshold |
| adminpack | 2.1 | ADMIN | administrative functions for PostgreSQL |
| amcheck | 1.4 | ADMIN | functions for verifying relation integrity |
| pg_surgery | 1.0 | ADMIN | extension to perform surgery on a damaged relation |

STAT

| Extension | Version | Category | Description |
|---|---|---|---|
| pg_profile | 4.11 | STAT | PostgreSQL load profile repository and report builder |
| pg_tracing | 0.1.3 | STAT | Distributed Tracing for PostgreSQL |
| pg_show_plans | 2.1.7 | STAT | show query plans of all currently running SQL statements |
| pg_stat_kcache | 2.3.1 | STAT | Kernel statistics gathering |
| pg_stat_monitor | 2.3.1 | STAT | Query performance monitoring tool based on pg_stat_statements, providing aggregated statistics, client information, plan details, and histograms |
| pg_qualstats | 2.1.3 | STAT | An extension collecting statistics about quals |
| pg_store_plans | 1.9 | STAT | track plan statistics of all SQL statements executed |
| pg_track_settings | 2.1.2 | STAT | Track settings changes |
| pg_wait_sampling | 1.1.9 | STAT | sampling based statistics of wait events |
| pgsentinel | 1.3.1 | STAT | active session history |
| system_stats | 3.2 | STAT | EnterpriseDB system statistics for PostgreSQL |
| pg_meta | 0.4.0 | STAT | Normalized, friendlier system catalog for PostgreSQL |
| pgnodemx | 1.7 | STAT | Capture node OS metrics via SQL queries |
| pg_sqlog | 1.6 | STAT | Provide SQL interface to logs |
| bgw_replstatus | 1.0.8 | STAT | Small PostgreSQL background worker to report whether a node is a replication master or standby |
| pgmeminfo | 1.0.0 | STAT | show memory usage |
| toastinfo | 1.5 | STAT | show details on toasted datums |
| pg_explain_ui | 0.0.2 | STAT | easily jump into a visual plan UI for any SQL query |
| pg_relusage | 0.0.1 | STAT | Log all the queries that reference a particular column |
| pagevis | 0.1 | STAT | Visualise database pages in ascii code |
| powa | 5.1.1 | STAT | PostgreSQL Workload Analyser-core |
| pg_overexplain | 1.0 | STAT | Allow EXPLAIN to dump even more details |
| pg_logicalinspect | 1.0 | STAT | Logical decoding components inspection |
| pageinspect | 1.12 | STAT | inspect the contents of database pages at a low level |
| pgrowlocks | 1.2 | STAT | show row-level locking information |
| sslinfo | 1.2 | STAT | information about SSL certificates |
| pg_buffercache | 1.5 | STAT | examine the shared buffer cache |
| pg_walinspect | 1.1 | STAT | functions to inspect contents of PostgreSQL Write-Ahead Log |
| pg_freespacemap | 1.2 | STAT | examine the free space map (FSM) |
| pg_visibility | 1.2 | STAT | examine the visibility map (VM) and page-level visibility info |
| pgstattuple | 1.5 | STAT | show tuple-level statistics |
| auto_explain | - | STAT | Provides a means for logging execution plans of slow statements automatically |
| pg_stat_statements | 1.11 | STAT | track planning and execution statistics of all SQL statements executed |

SEC

| Extension | Version | Category | Description |
|---|---|---|---|
| passwordcheck_cracklib | 3.1.0 | SEC | Strengthen PostgreSQL user password checks with cracklib |
| supautils | 3.0.2 | SEC | Extension that secures a cluster on a cloud environment |
| pgsodium | 3.1.9 | SEC | Postgres extension for libsodium functions |
| pg_vault | 0.3.1 | SEC | Supabase Vault Extension |
| pg_session_jwt | 0.4.0 | SEC | Manage authentication sessions using JWTs |
| pg_anon | 2.5.1 | SEC | PostgreSQL Anonymizer (anon) extension |
| pgsmcrypto | 0.1.1 | SEC | PostgreSQL SM Algorithm Extension |
| pg_enigma | 0.5.0 | SEC | Encrypted postgres data type |
| pgaudit | 18.0 | SEC | provides auditing functionality |
| pgauditlogtofile | 1.7.6 | SEC | pgAudit addon to redirect audit log to an independent file |
| pg_auditor | 0.2 | SEC | Audit data changes and provide flashback ability |
| logerrors | 2.1.5 | SEC | Function for collecting statistics about messages in logfile |
| pg_auth_mon | 3.0 | SEC | monitor connection attempts per user |
| pg_jobmon | 1.4.1 | SEC | Extension for logging and monitoring functions in PostgreSQL |
| credcheck | 4.4 | SEC | credcheck - postgresql plain text credential checker |
| pgcryptokey | 0.85 | SEC | cryptographic key management |
| login_hook | 1.7 | SEC | login_hook - hook to execute login_hook.login() at login time |
| set_user | 4.2.0 | SEC | similar to SET ROLE but with added logging |
| pg_snakeoil | 1.4 | SEC | The PostgreSQL Antivirus |
| pgextwlist | 1.19 | SEC | PostgreSQL Extension Whitelisting |
| sslutils | 1.4 | SEC | A Postgres extension for managing SSL certificates through SQL |
| pg_noset | 0.3.0 | SEC | Module for blocking SET variables for non-super users. |
| pg_tde | 1.0 | SEC | Percona pg_tde access method |
| sepgsql | - | SEC | label-based mandatory access control (MAC) based on SELinux security policy. |
| auth_delay | - | SEC | pause briefly before reporting authentication failure |
| pgcrypto | 1.3 | SEC | cryptographic functions |
| passwordcheck | - | SEC | checks user passwords and rejects weak passwords |

FDW

| Extension | Version | Category | Description |
|---|---|---|---|
| wrappers | 0.5.7 | FDW | Foreign data wrappers developed by Supabase |
| multicorn | 3.2 | FDW | Fetch foreign data in Python in your PostgreSQL server. |
| odbc_fdw | 0.5.1 | FDW | Foreign data wrapper for accessing remote databases using ODBC |
| jdbc_fdw | 0.4.0 | FDW | foreign-data wrapper for remote servers available over JDBC |
| pgspider_ext | 1.3.0 | FDW | foreign-data wrapper for remote PGSpider servers |
| mysql_fdw | 2.9.3 | FDW | Foreign data wrapper for querying a MySQL server |
| oracle_fdw | 2.8.0 | FDW | foreign data wrapper for Oracle access |
| tds_fdw | 2.0.5 | FDW | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server) |
| db2_fdw | 18.0.1 | FDW | foreign data wrapper for DB2 access |
| sqlite_fdw | 2.5.0 | FDW | SQLite Foreign Data Wrapper |
| pgbouncer_fdw | 1.4.0 | FDW | Extension for querying PgBouncer stats from normal SQL views & running pgbouncer commands from normal SQL functions |
| etcd_fdw | 0.0.0 | FDW | Foreign data wrapper for etcd |
| mongo_fdw | 5.5.3 | FDW | foreign data wrapper for MongoDB access |
| redis_fdw | 1.0 | FDW | Foreign data wrapper for querying a Redis server |
| pg_redis_pubsub | 0.0.1 | FDW | Send redis pub/sub messages to Redis from PostgreSQL Directly |
| kafka_fdw | 0.0.3 | FDW | kafka Foreign Data Wrapper for CSV formatted messages |
| hdfs_fdw | 2.3.3 | FDW | foreign-data wrapper for remote hdfs servers |
| firebird_fdw | 1.4.1 | FDW | Foreign data wrapper for Firebird |
| aws_s3 | 0.0.1 | FDW | aws_s3 postgres extension to import/export data from/to s3 |
| log_fdw | 1.4 | FDW | foreign-data wrapper for Postgres log file access |
| dblink | 1.2 | FDW | connect to other PostgreSQL databases from within a database |
| file_fdw | 1.0 | FDW | foreign-data wrapper for flat file access |
| postgres_fdw | 1.1 | FDW | foreign-data wrapper for remote PostgreSQL servers |

SIM

| Extension | Version | Category | Description |
|---|---|---|---|
| documentdb | 0.109 | SIM | API surface for DocumentDB for PostgreSQL |
| orafce | 4.16.3 | SIM | Functions and operators that emulate a subset of functions and packages from the Oracle RDBMS |
| pgtt | 4.4 | SIM | Extension to add Global Temporary Tables feature to PostgreSQL |
| session_variable | 3.4 | SIM | Registration and manipulation of session variables and constants |
| pg_statement_rollback | 1.5 | SIM | Server side rollback at statement level for PostgreSQL like Oracle or DB2 |
| pg_dbms_metadata | 1.0.0 | SIM | Extension to add Oracle DBMS_METADATA compatibility to PostgreSQL |
| pg_dbms_lock | 1.0 | SIM | Extension to add Oracle DBMS_LOCK full compatibility to PostgreSQL |
| pg_dbms_job | 1.5 | SIM | Extension to add Oracle DBMS_JOB full compatibility to PostgreSQL |
| pg_dbms_errlog | 2.2 | SIM | Emulate DBMS_ERRLOG Oracle module to log DML errors in a dedicated table. |
| babelfishpg_common | 3.3.3 | SIM | SQL Server Transact SQL Datatype Support |
| babelfishpg_tsql | 3.3.1 | SIM | SQL Server Transact SQL compatibility |
| babelfishpg_tds | 1.0.0 | SIM | SQL Server TDS protocol extension |
| babelfishpg_money | 1.1.0 | SIM | SQL Server Money Data Type |
| spat | 0.1.0a4 | SIM | Redis-like In-Memory DB Embedded in Postgres |
| pgmemcache | 2.3.0 | SIM | memcached interface |

ETL

| Extension | Version | Category | Description |
|---|---|---|---|
| pglogical | 2.4.6 | ETL | PostgreSQL Logical Replication |
| pglogical_ticker | 1.4.1 | ETL | Have an accurate view on pglogical replication delay |
| pgl_ddl_deploy | 2.2.1 | ETL | automated ddl deployment using pglogical |
| pg_failover_slots | 1.2.0 | ETL | PG Failover Slots extension |
| db_migrator | 1.0.0 | ETL | Tools to migrate other databases to PostgreSQL |
| pgactive | 2.1.7 | ETL | Active-Active Replication Extension for PostgreSQL |
| wal2json | 2.6 | ETL | Changing data capture in JSON format |
| wal2mongo | 1.0.7 | ETL | PostgreSQL logical decoding output plugin for MongoDB |
| decoderbufs | 3.4.0 | ETL | Logical decoding plugin that delivers WAL stream changes using a Protocol Buffer format |
| decoder_raw | 1.0 | ETL | Output plugin for logical replication in Raw SQL format |
| mimeo | 1.5.1 | ETL | Extension for specialized, per-table replication between PostgreSQL instances |
| repmgr | 5.5.0 | ETL | Replication manager for PostgreSQL |
| pg_fact_loader | 2.0.1 | ETL | build fact tables with Postgres |
| pg_bulkload | 3.1.23 | ETL | pg_bulkload is a high speed data loading utility for PostgreSQL |
| test_decoding | - | ETL | SQL-based test/example module for WAL logical decoding |
| pgoutput | - | ETL | Logical Replication output plugin |

6.4 - File Hierarchy

How Pigsty’s file system structure is designed and organized, and the directory structure used by each module.

Pigsty FHS

Pigsty’s home directory is located at ~/pigsty by default. The file structure within this directory is as follows:

#------------------------------------------------------------------------------
# pigsty
#  ^-----@app                    # Extra application resources and examples
#  ^-----@bin                    # Utility scripts
#  ^-----@docs                   # Documentation (docsify-compatible)
#  ^-----@files                  # Ansible file resources
#            ^-----@victoria     # Victoria rules and ops scripts (bin/rules)
#            ^-----@grafana      # Grafana dashboards
#            ^-----@postgres     # /pg/bin/ scripts
#            ^-----@migration    # PGSQL migration task definitions
#            ^-----@pki          # Self-signed CA and certificates
#  ^-----@roles                  # Ansible role implementations
#  ^-----@templates              # Ansible template files
#  ^-----@vagrant                # Vagrant sandbox VM templates
#  ^-----@terraform              # Terraform cloud VM provisioning templates
#  ^-----configure               # Configuration wizard script
#  ^-----ansible.cfg             # Ansible default configuration
#  ^-----pigsty.yml              # Pigsty default configuration file
#  ^-----*.yml                   # Ansible playbooks
#------------------------------------------------------------------------------
# /infra -> /data/infra          # infra runtime symlink
# /data/infra                    # root:infra 0771
#  ^-----@metrics                # VictoriaMetrics TSDB data
#  ^-----@logs                   # VictoriaLogs data
#  ^-----@traces                 # VictoriaTraces data
#  ^-----@alertmgr               # AlertManager data
#  ^-----@rules                  # rule definitions (including agent.yml)
#  ^-----@targets                # FileSD monitoring targets
#  ^-----@dashboards             # Grafana dashboard definitions
#  ^-----@datasources            # Grafana datasource definitions
#  ^-----prometheus.yml          # Victoria Prometheus-compatible config
#------------------------------------------------------------------------------

CA FHS

Pigsty’s self-signed CA is located in files/pki/ under the Pigsty home directory.

You must keep the CA key file secure: files/pki/ca/ca.key. This key is generated by the ca role during deploy.yml or infra.yml execution.

# pigsty/files/pki                           # (local_user) 0755
#  ^-----@ca                                 # (local_user) 0700
#         ^[email protected]                      # 0600, CRITICAL: keep secret
#         ^[email protected]                      # 0644, CRITICAL: trust anchor
#  ^-----@csr                                # (local_user) 0755, CSRs
#  ^-----@misc                               # (local_user) 0755, misc/issued certs
#  ^-----@etcd                               # (local_user) 0755, ETCD certs
#  ^-----@minio                              # (local_user) 0755, MinIO certs
#  ^-----@nginx                              # (local_user) 0755, Nginx SSL certs
#  ^-----@infra                              # (local_user) 0755, infra client certs
#  ^-----@pgsql                              # (local_user) 0755, PostgreSQL certs
#  ^-----@mongo                              # (local_user) 0755, Mongo/FerretDB certs
#  ^-----@mysql                              # (local_user) 0755, MySQL certs (placeholder)

Nodes managed by Pigsty will have the following certificate files installed:

/etc/pki/ca.crt                             # root:root 0644, root cert on all nodes
/etc/pki/ca-trust/source/anchors/ca.crt     # Symlink to system trust anchors

All infra nodes will have the following certificates:

/etc/pki/infra.crt                          # root:infra 0644, infra node cert
/etc/pki/infra.key                          # root:infra 0640, infra node key
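Certificates issued this way chain back to the self-signed CA, so they can be checked with plain openssl. The commented command is what you would run on a managed node with the default paths above; the rest is a self-contained sketch that builds a throwaway CA just to demonstrate the verification step (all names here are illustrative):

```shell
# On a managed node (default paths):
#   openssl verify -CAfile /etc/pki/ca.crt /etc/pki/infra.crt
# Self-contained demo with a throwaway CA:
tmp=$(mktemp -d)
# 1. create a self-signed CA (conceptually what the ca role does)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# 2. create a node key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=demo-node" -keyout "$tmp/node.key" -out "$tmp/node.csr" 2>/dev/null
openssl x509 -req -in "$tmp/node.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/node.crt" 2>/dev/null
# 3. verify the issued cert against the CA
result=$(openssl verify -CAfile "$tmp/ca.crt" "$tmp/node.crt")
echo "$result"   # ends with: OK
rm -rf "$tmp"
```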

In case your admin node fails, the files/pki directory and the pigsty.yml file should already be available on a backup admin node. You can use rsync to keep them in sync:

# run on meta-1, rsync to meta-2
cd ~/pigsty;
rsync -avz ./ meta-2:~/pigsty

INFRA FHS

The infra role creates the infra_data directory (default: /data/infra) and a symlink /infra -> /data/infra. Permissions on /data/infra are root:infra 0771; subdirectories default to *:infra 0750 unless overridden:

# /infra -> /data/infra
# /data/infra                              # root:infra 0771
#  ^-----@pgadmin                          # 5050:5050 0700
#  ^-----@alertmgr                         # prometheus:infra 0700
#  ^-----@conf                             # root:infra 0750
#            ^-----patronictl.yml          # root:admin 0640
#  ^-----@tmp                              # root:infra 0750
#  ^-----@hosts                            # dnsmasq:dnsmasq 0755 (DNS records)
#            ^-----default                 # root:root 0644
#  ^-----@datasources                      # root:infra 0750
#            ^-----*.json                  # 0600 (generated by register)
#  ^-----@dashboards                       # grafana:infra 0750
#  ^-----@metrics                          # victoria:infra 0750
#  ^-----@logs                             # victoria:infra 0750
#  ^-----@traces                           # victoria:infra 0750
#  ^-----@bin                              # victoria:infra 0750
#            ^-----check|new|reload|status # root:infra 0755
#  ^-----@rules                            # victoria:infra 0750
#            ^-----agent.yml               # victoria:infra 0644
#            ^-----infra.yml               # victoria:infra 0644
#            ^-----node.yml                # victoria:infra 0644
#            ^-----pgsql.yml               # victoria:infra 0644
#            ^-----redis.yml               # victoria:infra 0644
#            ^-----etcd.yml                # victoria:infra 0644
#            ^-----minio.yml               # victoria:infra 0644
#            ^-----kafka.yml               # victoria:infra 0644
#            ^-----mysql.yml               # victoria:infra 0644
#  ^-----@targets                          # victoria:infra 0750
#            ^-----@infra                  # infra targets (files 0640)
#            ^-----@node                   # node targets (files 0640)
#            ^-----@ping                   # ping targets (files 0640)
#            ^-----@etcd                   # etcd targets (files 0640)
#            ^-----@pgsql                  # pgsql targets (files 0640)
#            ^-----@pgrds                  # pgrds targets (files 0640)
#            ^-----@redis                  # redis targets (files 0640)
#            ^-----@minio                  # minio targets (files 0640)
#            ^-----@mongo                  # mongo targets (files 0640)
#            ^-----@juice                  # juicefs targets (files 0640)
#            ^-----@mysql                  # mysql targets (files 0640)
#            ^-----@kafka                  # kafka targets (files 0640)
#            ^-----@docker                 # docker targets (files 0640)
#            ^-----@patroni                # patroni SSL targets (files 0640)
#  ^-----prometheus.yml                    # victoria:infra 0644

This structure is created by: roles/infra/tasks/dir.yml, roles/infra/tasks/victoria.yml, roles/infra/tasks/register.yml, roles/infra/tasks/dns.yml, and roles/infra/tasks/env.yml.


NODE FHS

The node data directory is specified by node_data, defaulting to /data, owned by root:root with mode 0755.

Each component’s default data directory is located under this data directory:

/data                                 # root:root 0755
#  ^-----@postgres                    # postgres:postgres 0700 (default pg_fs_main)
#  ^-----@backups                     # postgres:postgres 0700 (default pg_fs_backup)
#  ^-----@redis                       # redis:redis 0700 (shared by multiple instances)
#  ^-----@minio                       # minio:minio 0750 (single-node single-disk mode)
#  ^-----@etcd                        # etcd:etcd 0700 (etcd_data)
#  ^-----@infra                       # root:infra 0771 (infra module data directory)
#  ^-----@docker                      # root:root 0755 (Docker data directory)
#  ^-----@...                         # Other component data directories

Victoria FHS

Monitoring config has moved from the legacy /etc/prometheus layout to the /infra runtime layout. The main template is roles/infra/templates/victoria/prometheus.yml, rendered to /infra/prometheus.yml.

files/victoria/bin/* and files/victoria/rules/* are synced to /infra/bin/ and /infra/rules/, while each module registers FileSD targets under /infra/targets/*.

# /infra
#  ^-----prometheus.yml              # Victoria main config (Prometheus-compatible) 0644
#  ^-----@bin                        # Utility scripts (check/new/reload/status) 0755
#  ^-----@rules                      # Recording and alerting rules (*.yml 0644)
#            ^-----agent.yml         # Agent pre-aggregation rules
#            ^-----infra.yml         # infra rules and alerts
#            ^-----etcd.yml          # etcd rules and alerts
#            ^-----node.yml          # node rules and alerts
#            ^-----pgsql.yml         # pgsql rules and alerts
#            ^-----redis.yml         # redis rules and alerts
#            ^-----minio.yml         # minio rules and alerts
#            ^-----kafka.yml         # kafka rules and alerts
#            ^-----mysql.yml         # mysql rules and alerts
#  ^-----@targets                    # FileSD targets (*.yml 0640)
#            ^-----@infra            # infra static targets
#            ^-----@node             # node static targets
#            ^-----@pgsql            # pgsql static targets
#            ^-----@pgrds            # pgsql remote RDS targets
#            ^-----@redis            # redis static targets
#            ^-----@minio            # minio static targets
#            ^-----@mongo            # mongo static targets
#            ^-----@mysql            # mysql static targets
#            ^-----@etcd             # etcd static targets
#            ^-----@ping             # ping static targets
#            ^-----@kafka            # kafka static targets
#            ^-----@juice            # juicefs static targets
#            ^-----@docker           # docker static targets
#            ^-----@patroni          # patroni static targets (when SSL enabled)
# /etc/default/vmetrics              # vmetrics startup args (victoria:infra 0644)
# /etc/default/vlogs                 # vlogs startup args (victoria:infra 0644)
# /etc/default/vtraces               # vtraces startup args (victoria:infra 0644)
# /etc/default/vmalert               # vmalert startup args (victoria:infra 0644)
# /etc/alertmanager.yml              # alertmanager main config (prometheus:infra 0644)
# /etc/default/alertmanager          # alertmanager env (prometheus:infra 0640)
# /etc/blackbox.yml                  # blackbox main config (prometheus:infra 0644)
# /etc/default/blackbox_exporter     # blackbox env (prometheus:infra 0644)
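Target files under /infra/targets/<module>/ follow the standard Prometheus file_sd format (a YAML list of target groups), which the Victoria stack also accepts. A sketch of what one such file might contain; the cluster name, instance name, address, and port below are illustrative, not Pigsty's rendered output:

```yaml
# e.g. /infra/targets/pgsql/pg-test-1.yml (illustrative)
- labels: { cls: pg-test, ins: pg-test-1 }
  targets:
    - 10.10.10.11:9630   # metrics endpoint of this instance
```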

PostgreSQL FHS

The following parameters are related to PostgreSQL directory layout:

  • pg_dbsu_home: Postgres default user home directory, default: /var/lib/pgsql
  • pg_bin_dir: Postgres binary directory, default: /usr/pgsql/bin/
  • pg_data: Postgres data directory, default: /pg/data
  • pg_fs_main: Postgres primary data directory, default: /data/postgres
  • pg_fs_backup: Postgres backup disk mount point, default: /data/backups (optional; can also be a subdirectory on primary disk)
  • pg_cluster_dir: Derived variable, {{ pg_fs_main }}/{{ pg_cluster }}-{{ pg_version }}
  • pg_backup_dir: Derived variable, {{ pg_fs_backup }}/{{ pg_cluster }}-{{ pg_version }}
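The two derived variables are simple template concatenations. A shell sketch of how they expand with the default values (using the pg-test / PG 18 example from the working assumptions below):

```shell
# defaults (illustrative values for a cluster named pg-test on PG 18)
pg_fs_main=/data/postgres
pg_fs_backup=/data/backups
pg_cluster=pg-test
pg_version=18
# derived, mirroring {{ pg_fs_main }}/{{ pg_cluster }}-{{ pg_version }}
pg_cluster_dir="${pg_fs_main}/${pg_cluster}-${pg_version}"
pg_backup_dir="${pg_fs_backup}/${pg_cluster}-${pg_version}"
echo "$pg_cluster_dir"   # /data/postgres/pg-test-18
echo "$pg_backup_dir"    # /data/backups/pg-test-18
```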
#--------------------------------------------------------------#
# Working assumptions:
#   {{ pg_fs_main   }} primary data directory, default: `/data/postgres` [SSD]
#   {{ pg_fs_backup }} backup data disk, default: `/data/backups`        [HDD]
#--------------------------------------------------------------#
# Default config (pg_cluster=pg-test, pg_version=18):
#     pg_fs_main = /data/postgres      High-speed SSD
#     pg_fs_backup = /data/backups     Cheap HDD (optional)
#
#     /pg        -> /data/postgres/pg-test-18
#     /pg/data   -> /data/postgres/pg-test-18/data
#     /pg/backup -> /data/backups/pg-test-18/backup
#--------------------------------------------------------------#
- name: create pgsql directories
  tags: pg_dir
  become: true
  block:

    - name: create pgsql directories
      file: path={{ item.path }} state=directory owner={{ item.owner|default(pg_dbsu) }} group={{ item.group|default('postgres') }} mode={{ item.mode }}
      with_items:
        - { path: "{{ pg_fs_main }}"            ,mode: "0700" }
        - { path: "{{ pg_fs_backup }}"          ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}"        ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}/bin"    ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}/log"    ,mode: "0750" }
        - { path: "{{ pg_cluster_dir }}/tmp"    ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}/cert"   ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}/conf"   ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}/data"   ,mode: "0700" }
        - { path: "{{ pg_cluster_dir }}/spool"  ,mode: "0700" }
        - { path: "{{ pg_backup_dir }}/backup"  ,mode: "0700" }
        - { path: "/var/run/postgresql"         ,owner: root, group: root, mode: "0755" }

    - name: link pgsql directories
      file: src={{ item.src }} dest={{ item.dest }} state=link
      with_items:
        - { src: "{{ pg_backup_dir }}/backup" ,dest: "{{ pg_cluster_dir }}/backup" }
        - { src: "{{ pg_cluster_dir }}"       ,dest: "/pg" }

Data File Structure

# Physical directories
{{ pg_fs_main }}     /data/postgres                    # postgres:postgres 0700, primary data directory
{{ pg_cluster_dir }} /data/postgres/pg-test-18         # postgres:postgres 0700, cluster directory
                     /data/postgres/pg-test-18/bin     # postgres:postgres 0700 (scripts root:postgres 0755)
                     /data/postgres/pg-test-18/log     # postgres:postgres 0750, logs
                     /data/postgres/pg-test-18/tmp     # postgres:postgres 0700, temp files
                     /data/postgres/pg-test-18/cert    # postgres:postgres 0700, certs
                     /data/postgres/pg-test-18/conf    # postgres:postgres 0700, config index
                     /data/postgres/pg-test-18/data    # postgres:postgres 0700, main data
                     /data/postgres/pg-test-18/spool   # postgres:postgres 0700, pgBackRest spool
                     /data/postgres/pg-test-18/backup  # -> /data/backups/pg-test-18/backup

{{ pg_fs_backup  }}  /data/backups                     # postgres:postgres 0700, optional backup mount
{{ pg_backup_dir }}  /data/backups/pg-test-18          # postgres:postgres 0700, cluster backup directory
                     /data/backups/pg-test-18/backup   # postgres:postgres 0700, actual backup location

# Symlinks
/pg             ->   /data/postgres/pg-test-18         # pg root symlink
/pg/data        ->   /data/postgres/pg-test-18/data    # pg data directory
/pg/backup      ->   /data/backups/pg-test-18/backup   # pg backup directory

Binary File Structure

On EL-compatible distributions (using yum/dnf), the default PostgreSQL installation location is:

/usr/pgsql-${pg_version}/

Pigsty creates a symlink named /usr/pgsql pointing to the actual version specified by the pg_version parameter, for example:

/usr/pgsql -> /usr/pgsql-18

Therefore, the default pg_bin_dir is /usr/pgsql/bin/, and this path is added to the system PATH environment variable, defined in: /etc/profile.d/pgsql.sh.

export PATH="/usr/pgsql/bin:/pg/bin:$PATH"
export PGHOME=/usr/pgsql
export PGDATA=/pg/data

On Ubuntu/Debian, the default PostgreSQL Deb package installation location is:

/usr/lib/postgresql/${pg_version}/bin

Pgbouncer FHS

Pgbouncer runs as the same OS user as PostgreSQL ({{ pg_dbsu }}, default postgres), with its configuration under /etc/pgbouncer.

  • pgbouncer.ini: main pool configuration (postgres:postgres 0640)
  • database.txt: pooled database definitions (postgres:postgres 0600)
  • useropts.txt: per-user connection options (postgres:postgres 0600)
  • userlist.txt: password file maintained by /pg/bin/pgb-user
  • pgb_hba.conf: access control file (postgres:postgres 0600)
/etc/pgbouncer/                # postgres:postgres 0750
/etc/pgbouncer/pgbouncer.ini   # postgres:postgres 0640
/etc/pgbouncer/database.txt    # postgres:postgres 0600
/etc/pgbouncer/useropts.txt    # postgres:postgres 0600
/etc/pgbouncer/userlist.txt    # postgres:postgres (managed by pgb-user)
/etc/pgbouncer/pgb_hba.conf    # postgres:postgres 0600
/pg/log/pgbouncer              # postgres:postgres 0750
/var/run/postgresql            # {{ pg_dbsu }}:postgres 0755 (managed by tmpfiles)
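A minimal sketch of how pgbouncer.ini ties these files together. This is not Pigsty's rendered configuration, just the shape: the keys are standard PgBouncer settings, and the include lines assume the file layout listed above:

```ini
[databases]
; pooled database definitions live in database.txt
%include /etc/pgbouncer/database.txt

[pgbouncer]
auth_type = hba
auth_hba_file = /etc/pgbouncer/pgb_hba.conf
; userlist.txt is maintained by /pg/bin/pgb-user
auth_file = /etc/pgbouncer/userlist.txt
logfile = /pg/log/pgbouncer/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
unix_socket_dir = /var/run/postgresql
```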

Redis FHS

Pigsty provides basic support for Redis deployment and monitoring.

Redis binaries are usually installed by the system package manager (service paths use /bin/*, with /usr/bin/* compatibility symlinks on most distros):

redis-server
redis-cli
redis-sentinel
redis-check-rdb
redis-check-aof
redis-benchmark
/usr/libexec/redis-shutdown

For a Redis instance named redis-test-1-6379, the related resources are as follows:

/usr/lib/systemd/system/redis-test-1-6379.service     # root:root 0644 (Debian: /lib/systemd/system)
/etc/redis/                                           # redis:redis 0700
/etc/redis/redis-test-1-6379.conf                     # redis:redis 0700
/data/redis/                                          # redis:redis 0700
/data/redis/redis-test-1-6379                         # redis:redis 0700
/data/redis/redis-test-1-6379/redis-test-1-6379.rdb   # RDB file
/data/redis/redis-test-1-6379/redis-test-1-6379.aof   # AOF file
/var/log/redis/                                       # redis:redis 0700
/var/log/redis/redis-test-1-6379.log                  # logs
/var/run/redis/                                       # redis:redis 0700 (tmpfiles creates 0755 at boot)
/var/run/redis/redis-test-1-6379.pid                  # PID

For Ubuntu/Debian, the default systemd service directory is /lib/systemd/system/ instead of /usr/lib/systemd/system/.
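The naming convention above can be captured in a small sketch (the helper `redis_paths` is hypothetical, for illustration only):

```bash
# Hypothetical helper: derive the resource paths for a Redis instance
# named <cluster>-<seq>-<port>, following the layout listed above.
redis_paths() {
  ins="$1"
  echo "/etc/redis/${ins}.conf"        # instance config
  echo "/data/redis/${ins}"            # data directory (RDB/AOF inside)
  echo "/var/log/redis/${ins}.log"     # log file
  echo "/var/run/redis/${ins}.pid"     # pid file
}

redis_paths redis-test-1-6379
```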

6.5 - Parameters

Pigsty v4.0 configuration overview and module parameter navigation

This is the parameter navigation page for Pigsty v4.0, without repeating full explanations for each parameter. For parameter details, please read each module’s param page.

Within the current documentation scope, the official modules contain about 360 parameters across 10 modules.


Module Parameter Navigation

| Module | Groups | Count | Description |
|--------|-------:|------:|-------------|
| PGSQL  | 9  | 125 | PostgreSQL HA cluster configuration |
| INFRA  | 10 | 72  | Software repository and Victoria-based observability infra |
| NODE   | 11 | 73  | Node initialization, system tuning, and ops baseline |
| ETCD   | 2  | 13  | ETCD cluster and removal safeguard parameters |
| MINIO  | 2  | 21  | MinIO deployment and removal parameters |
| REDIS  | 2  | 21  | Redis deployment and removal parameters |
| FERRET | 1  | 9   | FerretDB (Mongo API) parameters |
| DOCKER | 1  | 8   | Docker engine parameters |
| JUICE  | 1  | 2   | JuiceFS instance and cache parameters |
| VIBE   | 1  | 16  | Code/Jupyter/Node.js/Claude configuration |

Parameter Group Quick View

| Module | Major Groups |
|--------|--------------|
| PGSQL  | PG_ID, PG_BUSINESS, PG_INSTALL, PG_BOOTSTRAP, PG_PROVISION, PG_BACKUP, PG_ACCESS, PG_MONITOR, PG_REMOVE |
| INFRA  | META, CA, INFRA_ID, REPO, INFRA_PACKAGE, NGINX, DNS, VICTORIA, PROMETHEUS, GRAFANA |
| NODE   | NODE_ID, NODE_DNS, NODE_PACKAGE, NODE_TUNE, NODE_SEC, NODE_ADMIN, NODE_TIME, NODE_VIP, HAPROXY, NODE_EXPORTER, VECTOR |
| ETCD   | ETCD, ETCD_REMOVE |
| MINIO  | MINIO, MINIO_REMOVE |
| REDIS  | REDIS, REDIS_REMOVE |
| FERRET | FERRET |
| DOCKER | DOCKER |
| JUICE  | JUICE |
| VIBE   | VIBE |

Recommendations

  • Read in this order for first deployment: NODE, INFRA, PGSQL
  • In production, always review: *_safeguard, password credentials, ports, and network exposure
  • Validate changes on one cluster first, then roll out globally in batches
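These parameters follow standard Ansible variable precedence: defaults in `all.vars` are overridden by cluster (group) vars, which are in turn overridden by host vars and `-e` CLI extras. A minimal sketch using the real `pg_version` parameter (the cluster name and values are illustrative):

```yaml
all:
  vars:
    pg_version: 18            # global default for every PGSQL cluster
  children:
    pg-test:                  # illustrative cluster name
      vars:
        pg_version: 17        # cluster-level override beats the global default
```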

6.6 - Playbooks

Pigsty v4.0 preset Ansible playbook navigation and execution notes

This page summarizes Pigsty v4.0 playbook entries and usage guidance by module. For detailed task tags, open each module’s playbook page.

Module Playbook Navigation

| Module | Count | Playbooks |
|--------|------:|-----------|
| INFRA  | 3 | deploy.yml, infra.yml, infra-rm.yml |
| NODE   | 2 | node.yml, node-rm.yml |
| ETCD   | 2 | etcd.yml, etcd-rm.yml |
| PGSQL  | 7 | pgsql.yml, pgsql-rm.yml, pgsql-user.yml, pgsql-db.yml, pgsql-monitor.yml, pgsql-migration.yml, pgsql-pitr.yml |
| REDIS  | 2 | redis.yml, redis-rm.yml |
| MINIO  | 2 | minio.yml, minio-rm.yml |
| FERRET | 1 | mongo.yml |
| DOCKER | 1 | docker.yml |
| JUICE  | 1 | juice.yml |
| VIBE   | 1 | vibe.yml |

Playbook Matrix

| Playbook | Module | Purpose |
|----------|--------|---------|
| deploy.yml | INFRA | One-pass deployment for the core chain (Infra/Node/Etcd/PGSQL, enabling MinIO by config) |
| infra.yml | INFRA | Initialize infrastructure nodes |
| infra-rm.yml | INFRA | Remove infrastructure components |
| node.yml | NODE | Node onboarding and baseline convergence |
| node-rm.yml | NODE | Node offboarding |
| etcd.yml | ETCD | ETCD install/scale-out |
| etcd-rm.yml | ETCD | ETCD remove/scale-in |
| pgsql.yml | PGSQL | Initialize PostgreSQL cluster or add instance |
| pgsql-rm.yml | PGSQL | Remove PostgreSQL cluster/instance |
| pgsql-user.yml | PGSQL | Add business users |
| pgsql-db.yml | PGSQL | Add business databases |
| pgsql-monitor.yml | PGSQL | Register remote PostgreSQL for monitoring |
| pgsql-migration.yml | PGSQL | Generate migration runbook and scripts |
| pgsql-pitr.yml | PGSQL | Point-in-time recovery (PITR) |
| redis.yml | REDIS | Deploy Redis |
| redis-rm.yml | REDIS | Remove Redis |
| minio.yml | MINIO | Deploy MinIO |
| minio-rm.yml | MINIO | Remove MinIO |
| mongo.yml | FERRET | Deploy FerretDB (Mongo API) |
| docker.yml | DOCKER | Deploy Docker engine |
| juice.yml | JUICE | Deploy/remove JuiceFS instances |
| vibe.yml | VIBE | Deploy VIBE dev environment |

Auxiliary Playbooks

The following playbooks are cross-module helpers.

| Playbook | Description |
|----------|-------------|
| cache.yml | Build offline installation package cache |
| cert.yml | Issue certificates using Pigsty CA |
| app.yml | Install Docker Compose app templates |
| slim.yml | Minimal component installation scenario |

Playbook Usage Notes

Protection Mechanism

Several modules provide deletion safeguards through *_safeguard parameters:

By default, these safeguard parameters are undefined (not enabled). In production, explicitly set them to true for initialized clusters.

When safeguard is true, corresponding *-rm.yml playbooks abort immediately. You can force override via CLI:

./pgsql-rm.yml -l pg-test -e pg_safeguard=false
./etcd-rm.yml  -l etcd    -e etcd_safeguard=false
./minio-rm.yml -l minio   -e minio_safeguard=false

Limiting Execution Scope

Use -l to limit execution targets:

./pgsql.yml -l pg-meta            # run only on pg-meta cluster
./node.yml -l 10.10.10.10         # run only on one node
./redis.yml -l redis-test         # run only on redis-test cluster

For large-scale rollout, validate on one cluster first, then deploy in batches.

Idempotency

Most playbooks are idempotent and safe to rerun, with caveats:

  • infra.yml does not clean data by default; all clean parameters (vmetrics_clean, vlogs_clean, vtraces_clean, grafana_clean, nginx_clean) default to false
  • To rebuild from a clean state, explicitly set relevant clean parameters to true
  • Re-running *-rm.yml deletion playbooks requires extra caution
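For example, to rebuild the observability stack from scratch, the clean parameters named above can be enabled explicitly before rerunning infra.yml (a hedged sketch — these flags wipe existing data, so double-check each one before use):

```yaml
all:
  vars:
    vmetrics_clean: true      # wipe VictoriaMetrics data on next infra.yml run
    vlogs_clean: true         # wipe VictoriaLogs data
    vtraces_clean: true       # wipe VictoriaTraces data
    grafana_clean: true       # reset Grafana state
    nginx_clean: true         # reset Nginx content
```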

Task Tags

Use -t to run only selected task subsets:

./pgsql.yml -l pg-test -t pg_service    # refresh services only on pg-test
./node.yml -t haproxy                   # configure haproxy only
./etcd.yml -t etcd_launch               # restart etcd only

Quick Command Reference

INFRA Module

./deploy.yml                     # one-pass full Pigsty deployment
./infra.yml                      # initialize infrastructure
./infra-rm.yml                   # remove infrastructure
./cache.yml                      # build offline package cache
./cert.yml -e cn=<name>          # issue client certificate

NODE Module

./node.yml -l <cls|ip>           # add node
./node-rm.yml -l <cls|ip>        # remove node
bin/node-add <cls|ip>            # add node (wrapper)
bin/node-rm <cls|ip>             # remove node (wrapper)

ETCD Module

./etcd.yml                       # initialize etcd cluster
./etcd-rm.yml                    # remove etcd cluster
bin/etcd-add <ip>                # add etcd member (wrapper)
bin/etcd-rm <ip>                 # remove etcd member (wrapper)

PGSQL Module

./pgsql.yml -l <cls>                             # initialize PostgreSQL cluster
./pgsql-rm.yml -l <cls>                          # remove PostgreSQL cluster
./pgsql-user.yml -l <cls> -e username=<user>     # create business user
./pgsql-db.yml -l <cls> -e dbname=<db>           # create business database
./pgsql-monitor.yml -e clsname=<cls>             # monitor remote cluster
./pgsql-migration.yml -e@files/migration/<cls>.yml  # generate migration runbook
./pgsql-pitr.yml -l <cls> -e '{"pg_pitr": {}}'     # execute PITR recovery

bin/pgsql-add <cls>              # initialize cluster (wrapper)
bin/pgsql-rm <cls>               # remove cluster (wrapper)
bin/pgsql-user <cls> <user>      # create user (wrapper)
bin/pgsql-db <cls> <db>          # create database (wrapper)
bin/pgsql-svc <cls>              # refresh services (wrapper)
bin/pgsql-hba <cls>              # reload HBA (wrapper)
bin/pgmon-add <cls>              # monitor remote cluster (wrapper)

REDIS Module

./redis.yml -l <cls>             # initialize Redis cluster
./redis-rm.yml -l <cls>          # remove Redis cluster

MINIO Module

./minio.yml -l <cls>             # initialize MinIO cluster
./minio-rm.yml -l <cls>          # remove MinIO cluster

FERRET Module

./mongo.yml -l ferret            # install FerretDB

DOCKER Module

./docker.yml -l <host>           # install Docker
./app.yml -e app=<name>          # deploy Docker Compose app

6.7 - Port List

Default ports used by Pigsty components, with related parameters and status.

This page lists default ports used by Pigsty module components. Adjust as needed or use as a reference for fine-grained firewall configuration.

| Module | Component | Port | Parameter | Status |
|--------|-----------|-----:|-----------|--------|
| NODE   | node_exporter | 9100 | node_exporter_port | Enabled |
| NODE   | haproxy | 9101 | haproxy_exporter_port | Enabled |
| NODE   | vector | 9598 | vector_port | Enabled |
| NODE   | keepalived_exporter | 9650 | vip_exporter_port | Optional |
| NODE   | chronyd | 123 | - | Enabled |
| DOCKER | docker | 9323 | docker_exporter_port | Optional |
| INFRA  | nginx | 80 | nginx_port | Enabled |
| INFRA  | nginx | 443 | nginx_ssl_port | Enabled |
| INFRA  | nginx_exporter | 9113 | nginx_exporter_port | Enabled |
| INFRA  | grafana | 3000 | grafana_port | Enabled |
| INFRA  | victoriaMetrics | 8428 | vmetrics_port | Enabled |
| INFRA  | victoriaLogs | 9428 | vlogs_port | Enabled |
| INFRA  | victoriaTraces | 10428 | vtraces_port | Enabled |
| INFRA  | vmalert | 8880 | vmalert_port | Enabled |
| INFRA  | alertmanager | 9059 | alertmanager_port | Enabled |
| INFRA  | blackbox_exporter | 9115 | blackbox_port | Enabled |
| INFRA  | dnsmasq | 53 | dns_port | Enabled |
| ETCD   | etcd | 2379 | etcd_port | Enabled |
| ETCD   | etcd | 2380 | etcd_peer_port | Enabled |
| MINIO  | minio | 9000 | minio_port | Enabled |
| MINIO  | minio | 9001 | minio_admin_port | Enabled |
| REDIS  | redis | 6379 | redis_port | Optional |
| REDIS  | redis_exporter | 9121 | redis_exporter_port | Optional |
| FERRET | ferretdb | 27017 | mongo_port | Optional |
| FERRET | ferretdb (TLS) | 27018 | mongo_ssl_port | Optional |
| FERRET | mongo_exporter | 9216 | mongo_exporter_port | Enabled |
| VIBE   | code-server | 8443 | code_port | Optional |
| VIBE   | jupyterlab | 8888 | jupyter_port | Optional |
| PGSQL  | postgres | 5432 | pg_port | Enabled |
| PGSQL  | pgbouncer | 6432 | pgbouncer_port | Enabled |
| PGSQL  | patroni | 8008 | patroni_port | Enabled |
| PGSQL  | pg_exporter | 9630 | pg_exporter_port | Enabled |
| PGSQL  | pgbouncer_exporter | 9631 | pgbouncer_exporter_port | Enabled |
| PGSQL  | pgbackrest_exporter | 9854 | pgbackrest_exporter_port | Enabled |
| PGSQL  | {{ pg_cluster }}-primary | 5433 | pg_default_services | Enabled |
| PGSQL  | {{ pg_cluster }}-replica | 5434 | pg_default_services | Enabled |
| PGSQL  | {{ pg_cluster }}-default | 5436 | pg_default_services | Enabled |
| PGSQL  | {{ pg_cluster }}-offline | 5438 | pg_default_services | Enabled |
| PGSQL  | {{ pg_cluster }}-&lt;service&gt; | 543x | pg_services | Optional |
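For spot-checking whether a port from the table actually answers, a hedged sketch using bash's built-in `/dev/tcp` device (requires bash, not plain sh; the address is illustrative):

```bash
# Hedged sketch: probe a TCP port with bash's /dev/tcp redirection,
# so no nc/nmap is needed. Returns 0 if the port accepts a connection.
port_open() {   # usage: port_open <host> <port>
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

port_open 127.0.0.1 5432 && echo "postgres reachable" || echo "postgres not reachable"
```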

Public Port Recommendations

If you use firewall zone mode, expose only minimum required ports via node_firewall_public_port:

  • Minimal management surface: 22, 80, 443 (recommended)
  • If public direct DB access is required: additionally expose 5432

Avoid exposing internal component ports directly to the public internet: etcd (2379/2380), patroni (8008), exporters (9xxx), minio (9000/9001), redis (6379), ferretdb (27017/27018), etc.

node_firewall_mode: zone
node_firewall_public_port: [22, 80, 443, 5432]

7 - Applications

Software and tools that use PostgreSQL, managed via the Docker daemon

PostgreSQL is the most popular database in the world, and countless software is built on PostgreSQL, around PostgreSQL, or serves PostgreSQL itself, such as:

  • “Application software” that uses PostgreSQL as its preferred database
  • “Tooling software” that serves PostgreSQL development and management
  • “Database software” that derives from, wraps, forks, modifies, or extends PostgreSQL

Pigsty provides a series of Docker Compose templates for these software packages, applications, and databases:

| Name | Website | Type | State | Port | Domain | Description |
|------|---------|------|-------|-----:|--------|-------------|
| Supabase | Supabase | DB | GA | 8000 | supa.pigsty | OSS Firebase Alternative, Backend as Platform |
| PolarDB | PolarDB | DB | GA | 5532 | | OSS RAC for PostgreSQL |
| FerretDB | FerretDB | DB | GA | 27017 | | OSS Mongo Alternative based on PostgreSQL |
| MinIO | MinIO | DB | GA | 9000 | sss.pigsty | OSS AWS S3 Alternative, Simple Storage Service |
| EdgeDB | EdgeDB | DB | TBD | | | OSS Graph Database based on PostgreSQL |
| NocoDB | NocoDB | APP | GA | 8080 | noco.pigsty | OSS Airtable Alternative over PostgreSQL |
| Odoo | Odoo | APP | GA | 8069 | odoo.pigsty | OSS ERP Software based on PostgreSQL |
| Dify | Dify | APP | GA | 8001 | dify.pigsty | OSS AI Workflow Orchestration & LLMOps Platform |
| Jupyter | Jupyter | APP | GA | | lab.pigsty | OSS AI Python Notebook & Data Analysis IDE |
| Gitea | Gitea | APP | GA | 8889 | git.pigsty | OSS DevOps Git Service |
| Wiki | Wiki.js | APP | GA | 9002 | wiki.pigsty | OSS Wiki Software |
| GitLab | GitLab | APP | TBD | | | OSS GitHub Alternative, Code Management Platform |
| Mastodon | Mastodon | APP | TBD | | | OSS Decentralized Social Network |
| Keycloak | Keycloak | APP | TBD | | | OSS Identity & Access Management Component |
| Harbor | Harbor | APP | TBD | | | OSS Docker/K8S Image Repository |
| Confluence | Confluence | APP | TBD | | | Enterprise Knowledge Management System |
| Jira | Jira | APP | TBD | | | Enterprise Project Management Tools |
| Zabbix | Zabbix 7 | APP | TBD | | | OSS Monitoring Platform for Enterprise |
| Grafana | Grafana | APP | TBD | | | Dashboard, Data Visualization & Monitoring Platform |
| Metabase | Metabase | APP | GA | 9004 | mtbs.pigsty | Fast analysis of data from multiple data sources |
| ByteBase | ByteBase | APP | GA | 8887 | ddl.pigsty | Database Migration Tool for PostgreSQL |
| Kong | Kong | TOOL | GA | 8000 | api.pigsty | OSS API Gateway based on Nginx/OpenResty |
| PostgREST | PostgREST | TOOL | GA | 8884 | api.pigsty | Generate REST API from PostgreSQL Schemas |
| pgAdmin4 | pgAdmin4 | TOOL | GA | 8885 | adm.pigsty | PostgreSQL GUI Admin Tools |
| pgWeb | pgWeb | TOOL | GA | 8886 | cli.pigsty | PostgreSQL Web GUI Client |
| SchemaSpy | SchemaSpy | TOOL | TBD | | | Dump & Visualize PostgreSQL Schema |
| pgBadger | pgBadger | TOOL | TBD | | | PostgreSQL Log Analysis |
| pg_exporter | pg_exporter | TOOL | GA | 9630 | | Expose PostgreSQL & Pgbouncer Metrics for Prometheus |

7.1 - Enterprise Self-Hosted Supabase

Self-host enterprise-grade Supabase with Pigsty, featuring monitoring, high availability, PITR, IaC, and 440+ PostgreSQL extensions.

Supabase is great, but having your own Supabase is even better. Pigsty can help you deploy enterprise-grade Supabase on your own servers (physical, virtual, or cloud) with a single command — more extensions, better performance, deeper control, and more cost-effective.

Pigsty is one of three self-hosting approaches listed on the Supabase official documentation: Self-hosting: Third-Party Guides

This tutorial requires basic Linux knowledge. Otherwise, consider using Supabase cloud or plain Docker Compose self-hosting.


TL;DR

Prepare a Linux server, follow the Pigsty standard single-node installation process with the supabase config template:

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./configure -c supabase    # Use supabase config (change credentials in pigsty.yml)
vi pigsty.yml              # Edit domain, passwords, keys...
./deploy.yml               # Standard single-node Pigsty deployment
./docker.yml               # Install Docker module
./app.yml                  # Start Supabase stateless components (may be slow)

After installation, access Supabase Studio on port 8000 with username supabase and password pigsty.


Checklist


Table of Contents


What is Supabase?

Supabase is a BaaS (Backend as a Service), an open-source Firebase alternative, and the most popular database + backend solution in the AI Agent era. Supabase wraps PostgreSQL and provides authentication, messaging, edge functions, object storage, and automatically generates REST and GraphQL APIs based on your database schema.

Supabase aims to provide developers with a one-stop backend solution, reducing the complexity of developing and maintaining backend infrastructure. It allows developers to skip most backend development work — you only need to understand database design and frontend to ship quickly! Developers can use vibe coding to create a frontend and database schema to rapidly build complete applications.

Currently, Supabase is the most popular open-source project in the PostgreSQL ecosystem, with over 90,000 GitHub stars. Supabase also offers a “generous” free tier for small startups — free 500 MB storage, more than enough for storing user tables and analytics data.


Why Self-Host?

If Supabase cloud is so attractive, why self-host?

The most obvious reason is what we discussed in “Is Cloud Database an IQ Tax?”: when your data/compute scale exceeds the cloud computing sweet spot (Supabase: 4C/8G/500MB free storage), costs can explode. And nowadays, reliable local enterprise NVMe SSDs have three to four orders of magnitude cost advantage over cloud storage, and self-hosting can better leverage this.

Another important reason is functionality — Supabase cloud features are limited. Many powerful PostgreSQL extensions aren’t available in cloud services due to multi-tenant security challenges and licensing. Despite extensions being PostgreSQL’s core feature, only 64 extensions are available on Supabase cloud. Self-hosted Supabase with Pigsty provides up to 440 ready-to-use PostgreSQL extensions.

Additionally, self-control and vendor lock-in avoidance are important reasons for self-hosting. Although Supabase aims to provide a vendor-lock-free open-source Google Firebase alternative, self-hosting enterprise-grade Supabase is not trivial. Supabase includes a series of PostgreSQL extensions they develop and maintain, and plans to replace the native PostgreSQL kernel with OrioleDB (which they acquired). These kernels and extensions are not available in the official PGDG repository.

This is implicit vendor lock-in, preventing users from self-hosting in ways other than the supabase/postgres Docker image. Pigsty provides an open, transparent, and universal solution. We package all 10 missing Supabase extensions into ready-to-use RPM/DEB packages, ensuring they work on all major Linux distributions:

| Extension | Description |
|-----------|-------------|
| pg_graphql | GraphQL support in PostgreSQL (Rust), provided by PIGSTY |
| pg_jsonschema | JSON Schema validation (Rust), provided by PIGSTY |
| wrappers | Supabase foreign data wrapper bundle (Rust), provided by PIGSTY |
| index_advisor | Query index advisor (SQL), provided by PIGSTY |
| pg_net | Async non-blocking HTTP/HTTPS requests (C), provided by PIGSTY |
| vault | Store encrypted credentials in Vault (C), provided by PIGSTY |
| pgjwt | JSON Web Token API implementation (SQL), provided by PIGSTY |
| pgsodium | Table data encryption (TDE), provided by PIGSTY |
| supautils | Security utilities for cloud environments (C), provided by PIGSTY |
| pg_plan_filter | Filter queries by execution plan cost (C), provided by PIGSTY |

We also install most extensions by default in Supabase deployments. You can enable them as needed.
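As a sketch, extensions can be declared per database in the cluster definition via the `extensions` field of `pg_databases` (the cluster and database names below are illustrative):

```yaml
pg-meta:                      # illustrative cluster name
  vars:
    pg_databases:
      - name: supa            # illustrative database name
        extensions: [ pg_graphql, pg_jsonschema, wrappers, pg_net ]
```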

Pigsty also handles the underlying highly available PostgreSQL cluster, highly available MinIO object storage cluster, and even Docker deployment, Nginx reverse proxy, domain configuration, and HTTPS certificate issuance. You can spin up any number of stateless Supabase container clusters using Docker Compose and store state in external Pigsty-managed database services.

With this self-hosted architecture, you gain the freedom to use different kernels (PG 15-18, OrioleDB), install 437 extensions, scale Supabase/Postgres/MinIO, freedom from database operations, and freedom from vendor lock-in — running locally forever. Compared to cloud service costs, you only need to prepare servers and run a few commands.


Single-Node Quick Start

Let’s start with single-node Supabase deployment. We’ll cover multi-node high availability later.

Prepare a fresh Linux server, use the Pigsty supabase configuration template for standard installation, then run docker.yml and app.yml to start stateless Supabase containers (default ports 8000/8433).

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./configure -c supabase    # Use supabase config (change credentials in pigsty.yml)
vi pigsty.yml              # Edit domain, passwords, keys...
./deploy.yml               # Install Pigsty
./docker.yml               # Install Docker module
./app.yml                  # Start Supabase stateless components with Docker

Before deploying Supabase, modify the auto-generated pigsty.yml configuration file (domain and passwords) according to your needs. For local development/testing, you can skip this and customize later.

If configured correctly, after about ten minutes, you can access the Supabase Studio GUI at http://<your_ip_address>:8000 on your local network. Default username and password are supabase and pigsty.

Notes:

  • In mainland China, Pigsty uses 1Panel and 1ms DockerHub mirrors by default, which may be slow.
  • You can configure your own proxy and registry mirror, then manually pull images with cd /opt/supabase; docker compose pull. We also offer expert consulting services including complete offline installation packages.
  • If you need object storage functionality, you must access Supabase via domain and HTTPS, otherwise errors will occur.
  • For serious production deployments, always change all default passwords!

Key Technical Decisions

Here are some key technical decisions for self-hosting Supabase:

Single-node deployment doesn’t provide PostgreSQL/MinIO high availability. However, single-node deployment still has significant advantages over the official pure Docker Compose approach: out-of-the-box monitoring, freedom to install extensions, component scaling capabilities, and point-in-time recovery as a safety net.

If you only have one server or choose to self-host on cloud servers, Pigsty recommends using external S3 instead of local MinIO for object storage to hold PostgreSQL backups and Supabase Storage. Under single-node conditions, this deployment provides a minimum disaster-recovery safety net: hour-level RTO (recovery time) and MB-level RPO (data loss).

For serious production deployments, Pigsty recommends at least 3-4 nodes, ensuring both MinIO and PostgreSQL use enterprise-grade multi-node high availability deployments. You’ll need more nodes and disks, adjusting cluster configuration in pigsty.yml and Supabase cluster configuration to use high availability endpoints.

Some Supabase features require sending emails, so SMTP service is needed. Unless purely for internal use, production deployments should use SMTP cloud services. Self-hosted mail servers’ emails are often marked as spam.

If your service is directly exposed to the public internet, we strongly recommend using real domain names and HTTPS certificates via Nginx Portal.

Next, we’ll discuss advanced topics for improving Supabase security, availability, and performance beyond single-node deployment.


Advanced: Security Hardening

Pigsty Components

For serious production deployments, we strongly recommend changing the Pigsty component passwords before installation. These defaults are public and well-known — going to production without changing them is like running naked.

Supabase Keys

Besides Pigsty component passwords, you need to change Supabase keys, including:

Please follow the Supabase tutorial: Securing your services:

  • Generate a JWT_SECRET of at least 40 characters; it is used to issue the ANON_KEY and SERVICE_ROLE_KEY JWTs.
  • Use the tutorial tools to generate an ANON_KEY JWT based on JWT_SECRET and an expiration time — the anonymous user credential.
  • Use the tutorial tools to generate a SERVICE_ROLE_KEY JWT the same way — the higher-privilege service role credential.
  • Specify a random string of at least 32 characters for PG_META_CRYPTO_KEY to encrypt Studio UI and meta service interactions.
  • If using different PostgreSQL business user passwords, modify POSTGRES_PASSWORD accordingly.
  • If your object storage uses different passwords, modify S3_ACCESS_KEY and S3_SECRET_KEY accordingly.
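As a local sketch (not the official tutorial tool — verify the output there if unsure), the same HS256-signed JWTs can be minted with openssl; the roles anon and service_role are what Supabase expects:

```bash
# Sketch: mint Supabase-style HS256 JWTs locally with openssl.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

JWT_SECRET=$(openssl rand -base64 45 | tr -d '\n')   # >= 40 characters

sign_jwt() {  # usage: sign_jwt <role>
  header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
  now=$(date +%s)
  payload=$(printf '{"role":"%s","iss":"supabase","iat":%s,"exp":%s}' \
            "$1" "$now" "$((now + 315360000))" | b64url)        # ~10 years
  sig=$(printf '%s.%s' "$header" "$payload" \
        | openssl dgst -binary -sha256 -hmac "$JWT_SECRET" | b64url)
  printf '%s.%s.%s\n' "$header" "$payload" "$sig"
}

ANON_KEY=$(sign_jwt anon)
SERVICE_ROLE_KEY=$(sign_jwt service_role)
```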

After modifying Supabase credentials, restart Docker Compose to apply:

./app.yml -t app_config,app_launch   # Using playbook
cd /opt/supabase; make up            # Manual execution

Advanced: Domain Configuration

If using Supabase locally or on LAN, you can directly connect to Kong’s HTTP port 8000 via IP:Port.

You can use an internal statically resolved domain, but for serious production deployments we recommend a real domain + HTTPS to access Supabase. In that case, your server should have a public IP, you should own a domain, and your cloud/DNS/CDN provider's DNS resolution should point the domain at the node's public IP (optional fallback: static resolution in local /etc/hosts).

The simple approach is to batch-replace the placeholder domain (supa.pigsty) with your actual domain, e.g., supa.pigsty.cc:

sed -i -e 's/supa.pigsty/supa.pigsty.cc/g' ~/pigsty/pigsty.yml

If not configured beforehand, reload Nginx and Supabase configuration:

make cert       # Request certbot free HTTPS certificate
./app.yml       # Reload Supabase configuration

The modified configuration should look like:

all:
  vars:
    certbot_sign: true                # Use certbot to sign real certificates
    infra_portal:
      home: i.pigsty.cc               # Replace with your domain!
      supa:
        domain: supa.pigsty.cc        # Replace with your domain!
        endpoint: "10.10.10.10:8000"
        websocket: true
        certbot: supa.pigsty.cc       # Certificate name, usually same as domain

  children:
    supabase:
      vars:
        apps:
          supabase:                                         # Supabase app definition
            conf:                                           # Override /opt/supabase/.env
              SITE_URL: https://supa.pigsty.cc              # <------- Change to your external domain name
              API_EXTERNAL_URL: https://supa.pigsty.cc      # <------- Otherwise the storage API may not work!
              SUPABASE_PUBLIC_URL: https://supa.pigsty.cc   # <------- Don't forget to set this in infra_portal!

For complete domain/HTTPS configuration, see Certificate Management. You can also use Pigsty’s built-in local static resolution and self-signed HTTPS certificates as fallback.


Advanced: External Object Storage

You can use S3 or S3-compatible services for PostgreSQL backups and Supabase object storage. Here we use Alibaba Cloud OSS as an example.

Pigsty provides a terraform/spec/aliyun-s3.tf template for provisioning a server and OSS bucket on Alibaba Cloud.

First, modify the S3 configuration in all.children.supa.vars.apps.[supabase].conf to point to Alibaba Cloud OSS:

# if using s3/minio as file storage
S3_BUCKET: data                       # Replace with S3-compatible service info
S3_ENDPOINT: https://sss.pigsty:9000  # Replace with S3-compatible service info
S3_ACCESS_KEY: s3user_data            # Replace with S3-compatible service info
S3_SECRET_KEY: S3User.Data            # Replace with S3-compatible service info
S3_FORCE_PATH_STYLE: true             # Replace with S3-compatible service info
S3_REGION: stub                       # Replace with S3-compatible service info
S3_PROTOCOL: https                    # Replace with S3-compatible service info

Reload Supabase configuration:

./app.yml -t app_config,app_launch

You can also use S3 as PostgreSQL backup repository. Add an aliyun backup repository definition in all.vars.pgbackrest_repo:

all:
  vars:
    pgbackrest_method: aliyun          # pgbackrest backup method: local,minio,[user-defined repos...]
    pgbackrest_repo:                   # pgbackrest backup repo: https://pgbackrest.org/configuration.html#section-repository
      aliyun:                          # Define new backup repo 'aliyun'
        type: s3                       # Alibaba Cloud OSS is S3-compatible
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: pigsty-oss
        s3_key: xxxxxxxxxxxxxx
        s3_key_secret: xxxxxxxx
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest.MyPass    # Set encryption password for pgBackRest backup repo
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days

Then specify the aliyun backup repository in all.vars.pgbackrest_method and reset pgBackRest:

./pgsql.yml -t pgbackrest

Pigsty will switch the backup repository to external object storage. For more backup configuration, see PostgreSQL Backup.


Advanced: Using SMTP

You can use SMTP for sending emails. Modify the supabase app configuration with SMTP information:

all:
  children:
    supabase:        # supa group
      vars:          # supa group vars
        apps:        # supa group app list
          supabase:  # the supabase app
            conf:    # the supabase app conf entries
              SMTP_HOST: smtpdm.aliyun.com:80
              SMTP_PORT: 80
              SMTP_USER: [email protected]
              SMTP_PASS: your_email_user_password
              SMTP_SENDER_NAME: MySupabase
              SMTP_ADMIN_EMAIL: [email protected]
              ENABLE_ANONYMOUS_USERS: false

Don’t forget to reload configuration with app.yml.


Advanced: True High Availability

After these configurations, you have an enterprise-grade single-node Supabase with public domain, HTTPS certificate, SMTP, PITR backup, monitoring, IaC, and 400+ extensions. For high availability configuration, see other Pigsty documentation. We also offer expert consulting services for hands-on Supabase self-hosting — $400 USD to save you the hassle.

Single-node RTO/RPO relies on external object storage as a safety net. If your node fails, backups in external S3 storage let you redeploy Supabase on a new node and restore from backup. This provides minimum safety net RTO (hour-level recovery) / RPO (MB-level data loss) disaster recovery.

For RTO < 30s with zero data loss on failover, use multi-node high availability deployment:

  • ETCD: DCS needs three or more nodes to tolerate one node failure.
  • PGSQL: PostgreSQL synchronous commit (no data loss) mode recommends at least three nodes.
  • INFRA: Monitoring infrastructure failure has less impact; production recommends dual replicas.
  • Supabase stateless containers can also be multi-node replicas for high availability.

In this case, you also need to modify PostgreSQL and MinIO endpoints to use DNS / L2 VIP / HAProxy high availability endpoints. For these parts, follow the documentation for each Pigsty module. Reference conf/ha/trio.yml and conf/ha/safe.yml for upgrading to three or more nodes.

7.2 - Odoo: Self-Hosted Open Source ERP

How to spin up an out-of-the-box enterprise application suite Odoo and use Pigsty to manage its backend PostgreSQL database.

Odoo is an open-source enterprise resource planning (ERP) software that provides a full suite of business applications, including CRM, sales, purchasing, inventory, production, accounting, and other management functions. Odoo is a typical web application that uses PostgreSQL as its underlying database.

All your business on one platform — Simple, efficient, yet affordable

Public Demo (may not always be available): http://odoo.pigsty.io, username: [email protected], password: pigsty


Quick Start

On a fresh Linux x86/ARM server running a compatible operating system:

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./bootstrap                # Install Ansible
./configure -c app/odoo    # Use Odoo configuration (change credentials in pigsty.yml)
./deploy.yml               # Install Pigsty
./docker.yml               # Install Docker Compose
./app.yml                  # Start Odoo stateless components with Docker

Odoo listens on port 8069 by default. Access http://<ip>:8069 in your browser. The default username and password are both admin.

You can add a static resolution record in the /etc/hosts file of the machine running your browser, pointing odoo.pigsty at your server, which lets you access the Odoo web interface via http://odoo.pigsty.
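An idempotent sketch for that hosts entry (the helper name and the 10.10.10.10 address are illustrative; writing to /etc/hosts itself requires root):

```bash
# Hypothetical helper: append an "<ip> <name>" record unless one already exists.
# The third argument defaults to /etc/hosts (needs sudo); pass a scratch file to test.
add_host_record() {   # usage: add_host_record <ip> <name> [hosts_file]
  ip="$1"; name="$2"; file="${3:-/etc/hosts}"
  grep -q "[[:space:]]${name}\$" "$file" 2>/dev/null \
    || printf '%s %s\n' "$ip" "$name" >> "$file"
}

add_host_record 10.10.10.10 odoo.pigsty ./hosts.demo   # demo against a scratch file
```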

If you want to access Odoo via SSL/HTTPS, you need to use a real SSL certificate or trust the self-signed CA certificate automatically generated by Pigsty. (In Chrome, you can also type thisisunsafe to bypass certificate verification)


Configuration Template

conf/app/odoo.yml defines a template configuration file containing the resources required for a single Odoo instance.

all:
  children:

    # Odoo application (default username and password: admin/admin)
    odoo:
      hosts: { 10.10.10.10: {} }
      vars:
        app: odoo   # Specify app name to install (in apps)
        apps:       # Define all applications
          odoo:     # App name, should have corresponding ~/pigsty/app/odoo folder
            file:   # Optional directories to create
              - { path: /data/odoo         ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/webdata ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/addons  ,state: directory, owner: 100, group: 101 }
            conf:   # Override /opt/<app>/.env config file
              PG_HOST: 10.10.10.10            # PostgreSQL host
              PG_PORT: 5432                   # PostgreSQL port
              PG_USERNAME: odoo               # PostgreSQL user
              PG_PASSWORD: DBUser.Odoo        # PostgreSQL password
              ODOO_PORT: 8069                 # Odoo app port
              ODOO_DATA: /data/odoo/webdata   # Odoo webdata
              ODOO_ADDONS: /data/odoo/addons  # Odoo plugins
              ODOO_DBNAME: odoo               # Odoo database name
              ODOO_VERSION: 19.0              # Odoo image version

    # Odoo database
    pg-odoo:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-odoo
        pg_users:
          - { name: odoo    ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for odoo service }
          - { name: odoo_ro ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for odoo service  }
          - { name: odoo_rw ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for odoo service }
        pg_databases:
          - { name: odoo ,owner: odoo ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - { user: all ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # Full backup daily at 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # Global variables
    version: v4.0.0                   # Pigsty version string
    admin_ip: 10.10.10.10             # Admin node IP address
    region: default                   # Upstream mirror region: default|china|europe
    node_tune: oltp                   # Node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # PGSQL tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # Enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # Global proxy env for downloading packages & pulling docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345   # Add proxy env here for downloading packages or pulling images
      #https_proxy: 127.0.0.1:12345   # Usually the format is http://user:pass@host:port
      #all_proxy:   127.0.0.1:12345

    infra_portal:                      # Domain names and upstream servers
      home  : { domain: i.pigsty }
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      odoo:                            # Nginx server config for odoo
        domain: odoo.pigsty            # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8069"   # Odoo service endpoint: IP:PORT
        websocket: true                # Add websocket support
        certbot: odoo.pigsty           # Certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------#
    # Credentials: MUST CHANGE THESE!
    #----------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

Basics

Check the configurable environment variables in the .env file:

# https://hub.docker.com/_/odoo
PG_HOST=10.10.10.10
PG_PORT=5432
PG_USER=dbuser_odoo
PG_PASS=DBUser.Odoo
ODOO_PORT=8069

Then start Odoo with:

make up  # docker compose up

Access http://odoo.pigsty or http://10.10.10.10:8069

Makefile

make up         # Start Odoo with docker compose in minimal mode
make run        # Start Odoo with docker, local data directory and external PostgreSQL
make view       # Print Odoo access endpoints
make log        # tail -f Odoo logs
make info       # Inspect Odoo with jq
make stop       # Stop Odoo container
make clean      # Remove Odoo container
make pull       # Pull latest Odoo image
make rmi        # Remove Odoo image
make save       # Save Odoo image to /tmp/docker/odoo.tgz
make load       # Load Odoo image from /tmp/docker/odoo.tgz

Using External PostgreSQL

You can use an external PostgreSQL cluster for Odoo. Odoo creates its own database during setup, so you don’t need to create it manually.

pg_users: [ { name: dbuser_odoo ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: admin user for odoo database } ]
pg_databases: [ { name: odoo ,owner: dbuser_odoo ,revokeconn: true ,comment: odoo primary database } ]

Create the business user and database with:

bin/pgsql-user  pg-meta  dbuser_odoo
#bin/pgsql-db    pg-meta  odoo     # Odoo will create the database during setup

Check connectivity:

psql postgres://dbuser_odoo:DBUser.Odoo@10.10.10.10:5432/odoo

Expose Odoo Service

Expose the Odoo web service via Nginx portal:

    infra_portal:                     # Domain names and upstream servers
      home         : { domain: h.pigsty }
      odoo         : { domain: odoo.pigsty, endpoint: "127.0.0.1:8069", websocket: true }  # <------ Add this line

./infra.yml -t nginx   # Setup nginx infra portal

Odoo Addons

There are many Odoo modules available in the community. You can install them by downloading and placing them in the addons folder.

volumes:
  - ./addons:/mnt/extra-addons

You can mount the ./addons directory to /mnt/extra-addons in the container, then download and extract addons to the addons folder.

To enable addon modules, first enter Developer mode:

Settings -> General Settings -> Developer Tools -> Activate the developer mode

Then go to Apps -> Update Apps List, and you’ll find the extra addons available to install from the panel.

Frequently used free addons: Accounting Kit


Demo

Check the public demo: http://odoo.pigsty.io, username: [email protected], password: pigsty

If you want to access Odoo via SSL, you must trust files/pki/ca/ca.crt in your browser (or use the dirty hack thisisunsafe in Chrome).

7.3 - Dify: AI Workflow Platform

How to self-host the AI Workflow LLMOps platform — Dify, using external PostgreSQL, PGVector, and Redis for storage with Pigsty?

Dify is a Generative AI Application Innovation Engine and open-source LLM application development platform. It provides capabilities from Agent building to AI workflow orchestration, RAG retrieval, and model management, helping users easily build and operate generative AI native applications.

Pigsty provides support for self-hosted Dify, allowing you to deploy Dify with a single command while storing critical state in externally managed PostgreSQL. You can use pgvector as a vector database in the same PostgreSQL instance, further simplifying deployment.

Pigsty v4.0 currently supports Dify v1.8.1.


Quick Start

On a fresh Linux x86/ARM server running a compatible operating system:

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./bootstrap                # Install Pigsty dependencies
./configure -c app/dify    # Use Dify configuration template
vi pigsty.yml              # Edit passwords, domains, keys, etc.

./deploy.yml               # Install Pigsty
./docker.yml               # Install Docker and Compose
./app.yml                  # Install Dify

Dify listens on port 5001 by default. Access http://<ip>:5001 in your browser and set up your initial user credentials to log in.

Once Dify starts, you can install various extensions, configure system models, and start using it!


Why Self-Host

There are many reasons to self-host Dify, but the primary motivation is data security. The Docker Compose template provided by Dify uses basic default database images, lacking enterprise features like high availability, disaster recovery, monitoring, IaC, and PITR capabilities.

Pigsty elegantly solves these issues for Dify, deploying all components with a single command based on configuration files and using mirrors to address China region access challenges. This makes Dify deployment and delivery very smooth. It handles PostgreSQL primary database, PGVector vector database, MinIO object storage, Redis, Prometheus monitoring, Grafana visualization, Nginx reverse proxy, and free HTTPS certificates all at once.

Pigsty ensures all Dify state is stored in externally managed services, including metadata in PostgreSQL and other data in the file system. Dify instances launched via Docker Compose become stateless applications that can be destroyed and rebuilt at any time, greatly simplifying operations.


Installation

Let’s start with single-node Dify deployment. We’ll cover production high-availability deployment methods later.

First, use Pigsty’s standard installation process to install the PostgreSQL instance required by Dify:

curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./bootstrap               # Prepare Pigsty dependencies
./configure -c app/dify   # Use Dify application template
vi pigsty.yml             # Edit configuration file, modify domains and passwords
./deploy.yml              # Install Pigsty and various databases

When you use the ./configure -c app/dify command, Pigsty automatically generates a configuration file based on the conf/app/dify.yml template and your current environment. You should modify passwords, domains, and other relevant parameters in the generated pigsty.yml configuration file according to your needs, then run ./deploy.yml to execute the standard installation process.

Next, run docker.yml to install Docker and Docker Compose, then use app.yml to complete Dify deployment:

./docker.yml              # Install Docker and Docker Compose
./app.yml                 # Deploy Dify stateless components with Docker

You can access the Dify Web admin interface at http://<your_ip_address>:5001 on your local network.

On first login, you will be prompted to set up the admin username, email, and password.

You can also use the locally resolved placeholder domain dify.pigsty, or follow the configuration below to use a real domain with an HTTPS certificate.


Configuration

Here’s a detailed explanation of the default configuration generated from the conf/app/dify.yml template:

---
#==============================================================#
# File      :   dify.yml
# Desc      :   pigsty config for running 1-node dify app
# Ctime     :   2025-02-24
# Mtime     :   2026-01-18
# Docs      :   https://pigsty.io/docs/app/dify
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#
# Last Verified Dify Version: v1.8.1 on 2025-09-08
# tutorial: https://pigsty.io/docs/app/dify
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/dify   # use this dify config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install dify with docker-compose
#
# To replace domain name:
#   sed -ie 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml


all:
  children:

    # the dify application
    dify:
      hosts: { 10.10.10.10: {} }
      vars:
        app: dify   # specify app name to be installed (in the apps)
        apps:       # define all applications
          dify:     # app name, should have corresponding ~/pigsty/app/dify folder
            file:   # data directory to be created
              - { path: /data/dify ,state: directory ,mode: 0755 }
            conf:   # override /opt/dify/.env config file

              # change domain, mirror, proxy, secret key
              NGINX_SERVER_NAME: dify.pigsty
              # A secret key for signing and encryption, gen with `openssl rand -base64 42` (CHANGE PASSWORD!)
              SECRET_KEY: sk-somerandomkey
              # expose DIFY nginx service with port 5001 by default
              DIFY_PORT: 5001
              # where to store dify files? the default is ./volume, we'll use another volume created above
              DIFY_DATA: /data/dify

              # proxy and mirror settings
              #PIP_MIRROR_URL: https://pypi.tuna.tsinghua.edu.cn/simple
              #SANDBOX_HTTP_PROXY: http://10.10.10.10:12345
              #SANDBOX_HTTPS_PROXY: http://10.10.10.10:12345

              # database credentials
              DB_USERNAME: dify
              DB_PASSWORD: difyai123456
              DB_HOST: 10.10.10.10
              DB_PORT: 5432
              DB_DATABASE: dify
              VECTOR_STORE: pgvector
              PGVECTOR_HOST: 10.10.10.10
              PGVECTOR_PORT: 5432
              PGVECTOR_USER: dify
              PGVECTOR_PASSWORD: difyai123456
              PGVECTOR_DATABASE: dify
              PGVECTOR_MIN_CONNECTION: 2
              PGVECTOR_MAX_CONNECTION: 10

    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dify ,password: difyai123456 ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: dify superuser }
        pg_databases:
          - { name: dify        ,owner: dify ,comment: dify main database  }
          - { name: dify_plugin ,owner: dify ,comment: dify plugin daemon database }
        pg_hba_rules:
          - { user: dify ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow dify access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy format is http://user:pass@host:port
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home   :  { domain: i.pigsty }
      #minio :  { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      dify:                            # nginx server config for dify
        domain: dify.pigsty            # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5001"   # dify service endpoint: IP:PORT
        websocket: true                # add websocket support
        certbot: dify.pigsty           # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Checklist

Here’s a checklist of configuration items you need to pay attention to:

  • Hardware/Software: Prepare required machine resources: Linux x86_64/arm64 server, fresh installation of a mainstream Linux OS
  • Network/Permissions: SSH passwordless login access, user with sudo privileges without password
  • Ensure the machine has a static IPv4 network address on the internal network and can access the internet
  • If accessing via public network, ensure you have a domain pointing to the node’s public IP address
  • Ensure you use the app/dify configuration template and modify parameters as needed
    • configure -c app/dify, enter the node’s internal primary IP address, or specify via -i <primary_ip> command line parameter
  • Have you changed all password-related configuration parameters? [Optional]
  • Have you changed the PostgreSQL cluster business user password and application configurations using these passwords?
    • Default username dify and password difyai123456 are generated by Pigsty for Dify; modify according to your needs
    • In the Dify configuration block, modify DB_USERNAME, DB_PASSWORD, PGVECTOR_USER, PGVECTOR_PASSWORD accordingly
  • Have you changed Dify’s default encryption key?
    • You can randomly generate a password string with openssl rand -base64 42 and fill in the SECRET_KEY parameter
  • Have you changed the domain used by Dify?
    • Replace placeholder domain dify.pigsty with your actual domain, e.g., dify.pigsty.cc
    • You can use sed -ie 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml to modify Dify’s domain
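The secret-key item above can be sketched as follows; the `sk-` prefix simply mirrors the template’s placeholder and is an assumption, not a Dify requirement:

```shell
# Generate a random SECRET_KEY for Dify (sketch; paste the output into pigsty.yml)
SECRET_KEY="sk-$(openssl rand -base64 42)"
echo "$SECRET_KEY"
```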

Domain and SSL

If you want to use a real domain with an HTTPS certificate, you need to modify the pigsty.yml configuration file:

  • The dify domain in the infra_portal parameter
  • It’s best to specify an email address certbot_email for certificate expiration notifications
  • Configure Dify’s NGINX_SERVER_NAME parameter to specify your actual domain
all:
  children:                            # Cluster definitions
    dify:                              # Dify group
      vars:                            # Dify group variables
        apps:                          # Application configuration
          dify:                        # Dify application definition
            conf:                      # Dify application configuration
              NGINX_SERVER_NAME: dify.pigsty

  vars:                                # Global parameters
    #certbot_sign: true                # Use Certbot for free HTTPS certificate
    certbot_email: [email protected]      # Email for certificate requests, for expiration notifications, optional
    infra_portal:                      # Configure Nginx servers
      dify:                            # Dify server definition
        domain: dify.pigsty            # Replace with your own domain here!
        endpoint: "10.10.10.10:5001"   # Specify Dify's IP and port here (auto-configured by default)
        websocket: true                # Dify requires websocket enabled
        certbot: dify.pigsty           # Specify Certbot certificate name

Use the following commands to request Nginx certificates:

# Request certificate, can also manually run /etc/nginx/sign-cert script
make cert

# The above Makefile shortcut actually runs the following playbook task:
./infra.yml -t nginx_certbot,nginx_reload -e certbot_sign=true

Run the app.yml playbook to redeploy the Dify service so that the NGINX_SERVER_NAME configuration takes effect:

./app.yml

File Backup

You can use restic to backup Dify’s file storage (default location /data/dify):

export RESTIC_REPOSITORY=/data/backups/dify   # Specify dify backup directory
export RESTIC_PASSWORD=some-strong-password   # Specify backup encryption password
mkdir -p ${RESTIC_REPOSITORY}                 # Create dify backup directory
restic init

After creating the Restic backup repository, you can backup Dify with:

export RESTIC_REPOSITORY=/data/backups/dify   # Specify dify backup directory
export RESTIC_PASSWORD=some-strong-password   # Specify backup encryption password

restic backup /data/dify                      # Backup /data/dify directory to repository
restic snapshots                              # View backup snapshot list
restic restore -t /data/dify 0b11f778         # Restore snapshot 0b11f778 (example ID) to /data/dify
restic check                                  # Periodically check repository integrity
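To run the backup unattended, you could schedule it with a crontab entry; a hypothetical sketch (paths and password are the examples from above):

```
# /etc/cron.d/dify-backup sketch: nightly restic backup at 02:00 (hypothetical)
00 02 * * * root RESTIC_REPOSITORY=/data/backups/dify RESTIC_PASSWORD=some-strong-password restic backup /data/dify
```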

Another more reliable method is using JuiceFS to mount MinIO object storage to the /data/dify directory, allowing you to use MinIO/S3 for file state storage.

If you want to store all data in PostgreSQL, consider “storing file system data in PostgreSQL using JuiceFS”.

For example, you can create another dify_fs database and use it as JuiceFS metadata storage:

METAURL=postgres://dify:difyai123456@:5432/dify_fs
OPTIONS=(
  --storage postgres
  --bucket :5432/dify_fs
  --access-key dify
  --secret-key difyai123456
  ${METAURL}
  jfs
)
juicefs format "${OPTIONS[@]}"         # Create PG file system
juicefs mount ${METAURL} /data/dify -d # Mount to /data/dify directory in background
juicefs bench /data/dify               # Test performance
juicefs umount /data/dify              # Unmount

Reference

Dify Self-Hosting FAQ

7.4 - Enterprise Software

Enterprise-grade open source software templates

7.5 - NocoDB: Open-Source Airtable

Use NocoDB to transform PostgreSQL databases into smart spreadsheets, a no-code database application platform.

NocoDB is an open-source Airtable alternative that turns any database into a smart spreadsheet.

It provides a rich user interface that allows you to create powerful database applications without writing code. NocoDB supports PostgreSQL, MySQL, SQL Server, and more, making it ideal for building internal tools and data management systems.

Quick Start

Pigsty provides a Docker Compose configuration file for NocoDB in the software template directory:

cd ~/pigsty/app/nocodb

Review and modify the .env configuration file (adjust database connections as needed).

Start the service:

make up     # Start NocoDB with Docker Compose

Access NocoDB:

  • Default URL: http://nocodb.pigsty
  • Alternate URL: http://10.10.10.10:8080
  • First-time access requires creating an administrator account

Management Commands

Pigsty provides convenient Makefile commands to manage NocoDB:

make up      # Start NocoDB service
make run     # Start with Docker (connect to external PostgreSQL)
make view    # Display NocoDB access URL
make log     # View container logs
make info    # View service details
make stop    # Stop the service
make clean   # Stop and remove containers
make pull    # Pull the latest image
make rmi     # Remove NocoDB image
make save    # Save image to /tmp/nocodb.tgz
make load    # Load image from /tmp/nocodb.tgz

Connect to PostgreSQL

NocoDB can connect to PostgreSQL databases managed by Pigsty.

When adding a new project in the NocoDB interface, select “External Database” and enter the PostgreSQL connection information:

Host: 10.10.10.10
Port: 5432
Database Name: your_database
Username: your_username
Password: your_password
SSL: Disabled (or enable as needed)

After successful connection, NocoDB will automatically read the database table structure, and you can manage data through the visual interface.
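If you want a dedicated business database for NocoDB to manage, a hypothetical Pigsty cluster definition could look like this (user, password, and database names are examples following the conventions used elsewhere in this document):

```yaml
pg_users:
  - { name: dbuser_noco ,password: DBUser.Noco ,pgbouncer: true ,roles: [ dbrole_admin ] ,comment: admin user for nocodb }
pg_databases:
  - { name: noco ,owner: dbuser_noco ,comment: nocodb business database }
```

Apply with bin/pgsql-user and bin/pgsql-db against your target cluster, as shown in the other app templates.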

Features

  • Smart Spreadsheet Interface: Excel/Airtable-like user experience
  • Multiple Views: Grid, form, kanban, calendar, gallery views
  • Collaboration Features: Team collaboration, permission management, comments
  • API Support: Auto-generated REST API
  • Integration Capabilities: Webhooks, Zapier integrations
  • Import/Export: CSV, Excel format support
  • Formulas and Validation: Complex data calculations and validation rules

Configuration

NocoDB configuration is in the .env file:

# Database connection (NocoDB metadata storage)
NC_DB=pg://postgres:[email protected]:5432/nocodb

# JWT secret (recommended to change)
NC_AUTH_JWT_SECRET=your-secret-key

# Other settings
NC_PUBLIC_URL=http://nocodb.pigsty
NC_DISABLE_TELE=true

Data Persistence

NocoDB metadata is stored by default in an external PostgreSQL database, and application data can also be stored in PostgreSQL.

If using local storage, data is saved in the /data/nocodb directory.

Security Recommendations

  1. Change Default Secret: Modify NC_AUTH_JWT_SECRET in the .env file
  2. Use Strong Passwords: Set strong passwords for administrator accounts
  3. Configure HTTPS: Enable HTTPS for production environments
  4. Restrict Access: Limit access through firewall or Nginx
  5. Regular Backups: Regularly back up the NocoDB metadata database

7.6 - Teable: AI No-Code Database

Build AI-powered no-code database applications with Teable to boost team productivity.

Teable is an AI-powered no-code database platform designed for team collaboration and automation.

Teable perfectly combines the power of databases with the ease of spreadsheets, integrating AI capabilities to help teams efficiently generate, automate, and collaborate on data.

Quick Start

Teable requires a complete Pigsty environment (including PostgreSQL, Redis, MinIO).

Prepare Environment

cd ~/pigsty
./bootstrap                # Prepare local repo and Ansible
./configure -c app/teable  # Important: modify default credentials!
./deploy.yml               # Install Pigsty, PostgreSQL, MinIO
./redis.yml                # Install Redis instance
./docker.yml               # Install Docker and Docker Compose
./app.yml                  # Install Teable with Docker Compose

Access Service

  • Default URL: http://teable.pigsty
  • Alternate URL: http://10.10.10.10:3000
  • First-time access requires registering an administrator account

Management Commands

Manage Teable in the Pigsty software template directory:

cd ~/pigsty/app/teable

make up      # Start Teable service
make down    # Stop Teable service
make log     # View container logs
make clean   # Clean up containers and data

Architecture

Teable depends on the following components:

  • PostgreSQL: Stores application data and metadata
  • Redis: Caching and session management
  • MinIO: Object storage (files, images, etc.)
  • Docker: Container runtime environment

Ensure these services are properly installed before deploying Teable.

Features

  • AI Integration: Built-in AI assistant for auto-generating data, formulas, and workflows
  • Smart Tables: Powerful table functionality with multiple field types
  • Automated Workflows: No-code automation to boost team efficiency
  • Multiple Views: Grid, form, kanban, calendar, and more
  • Team Collaboration: Real-time collaboration, permission management, comments
  • API and Integrations: Auto-generated API with Webhook support
  • Template Library: Rich application templates for quick project starts

Configuration

Teable configuration is managed through environment variables in docker-compose.yml:

# PostgreSQL connection
POSTGRES_HOST=10.10.10.10
POSTGRES_PORT=5432
POSTGRES_DB=teable
POSTGRES_USER=dbuser_teable
POSTGRES_PASSWORD=DBUser.Teable

# Redis connection
REDIS_HOST=10.10.10.10
REDIS_PORT=6379
REDIS_DB=0

# MinIO connection
MINIO_ENDPOINT=http://10.10.10.10:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin

# Application configuration
BACKEND_URL=http://teable.pigsty
PUBLIC_ORIGIN=http://teable.pigsty

Important: In production environments, modify all default passwords and keys!

Data Persistence

Teable data persistence relies on:

  • PostgreSQL: All structured data stored in PostgreSQL
  • MinIO: Files, images, and other unstructured data stored in MinIO
  • Redis: Cache data (optional persistence)

Regularly back up the PostgreSQL database and MinIO buckets to ensure data safety.
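For the PostgreSQL side, a nightly full backup can be scheduled through Pigsty’s node_crontab, as the other templates in this section do:

```yaml
node_crontab:   # on the pg cluster definition
  - '00 01 * * * postgres /pg/bin/pg-backup full'   # full backup daily at 1am
```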

Security Recommendations

  1. Change Default Credentials: Modify all default usernames and passwords in configuration files
  2. Enable HTTPS: Configure SSL certificates for production environments
  3. Configure Firewall: Restrict access to services
  4. Regular Backups: Regularly back up PostgreSQL and MinIO data
  5. Update Components: Keep Teable and dependent components up to date

7.7 - Gitea: Simple Self-Hosting Git Service

Launch the self-hosting Git service with Gitea and Pigsty managed PostgreSQL

Public Demo: http://git.pigsty.cc

TL;DR

cd ~/pigsty/app/gitea; make up

Pigsty uses port 8889 for Gitea by default:

http://git.pigsty or http://10.10.10.10:8889

make up      # pull up gitea with docker-compose in minimal mode
make run     # launch gitea with docker , local data dir and external PostgreSQL
make view    # print gitea access point
make log     # tail -f gitea logs
make info    # introspect gitea with jq
make stop    # stop gitea container
make clean   # remove gitea container
make pull    # pull latest gitea image
make rmi     # remove gitea image
make save    # save gitea image to /tmp/gitea.tgz
make load    # load gitea image from /tmp

PostgreSQL Preparation

Gitea uses built-in SQLite as its default metadata storage. You can point Gitea at an external PostgreSQL instance by setting its database connection environment variables:

# postgres://dbuser_gitea:DBUser.gitea@10.10.10.10:5432/gitea
db:   { name: gitea, owner: dbuser_gitea, comment: gitea primary database }
user: { name: dbuser_gitea , password: DBUser.gitea, roles: [ dbrole_admin ] }
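With the official Gitea container image, the connection can be supplied through Gitea’s GITEA__section__KEY environment variables; a sketch for the docker-compose environment block (values follow the credentials above):

```yaml
environment:
  - GITEA__database__DB_TYPE=postgres
  - GITEA__database__HOST=10.10.10.10:5432
  - GITEA__database__NAME=gitea
  - GITEA__database__USER=dbuser_gitea
  - GITEA__database__PASSWD=DBUser.gitea
```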

7.8 - Wiki.js: OSS Wiki Software

How to self-hosting your own wikipedia with Wiki.js and use Pigsty managed PostgreSQL as the backend database

Public Demo: http://wiki.pigsty.cc

TL;DR

cd ~/pigsty/app/wiki; docker compose up -d

Postgres Preparation

# postgres://dbuser_wiki:DBUser.Wiki@10.10.10.10:5432/wiki
db:   { name: wiki ,owner: dbuser_wiki ,revokeconn: true ,comment: wiki.js primary database }
user: { name: dbuser_wiki ,password: DBUser.Wiki ,pgbouncer: true ,roles: [ dbrole_admin ] }
bin/pgsql-user pg-meta dbuser_wiki
bin/pgsql-db   pg-meta wiki

Configuration

version: "3"
services:
  wiki:
    container_name: wiki
    image: requarks/wiki:2
    environment:
      DB_TYPE: postgres
      DB_HOST: 10.10.10.10
      DB_PORT: 5432
      DB_USER: dbuser_wiki
      DB_PASS: DBUser.Wiki
      DB_NAME: wiki
    restart: unless-stopped
    ports:
      - "9002:3000"

Access

  • Default port for wiki: 9002

# add to infra_portal and reload nginx
wiki : { domain: wiki.pigsty.cc ,endpoint: "127.0.0.1:9002" }

./infra.yml -t nginx   # Setup nginx infra portal

7.9 - Mattermost: Open-Source IM

Build a private team collaboration platform with Mattermost, the open-source Slack alternative.

Mattermost is an open-source team collaboration and messaging platform.

Mattermost provides instant messaging, file sharing, audio/video calls, and more. It’s an open-source alternative to Slack and Microsoft Teams, particularly suitable for enterprises requiring self-hosted deployment.

Quick Start

cd ~/pigsty/app/mattermost
make up     # Start Mattermost with Docker Compose

Access URL: http://mattermost.pigsty or http://10.10.10.10:8065

First-time access requires creating an administrator account.

Features

  • Instant Messaging: Personal and group chat
  • Channel Management: Public and private channels
  • File Sharing: Secure file storage and sharing
  • Audio/Video Calls: Built-in calling functionality
  • Integration Capabilities: Webhooks, Bots, and plugins support
  • Mobile Apps: iOS and Android clients
  • Enterprise-grade: SSO, LDAP, compliance features

Connect to PostgreSQL

Mattermost uses PostgreSQL for data storage. Configure the connection information:

MM_SQLSETTINGS_DRIVERNAME=postgres
MM_SQLSETTINGS_DATASOURCE=postgres://dbuser_mm:[email protected]:5432/mattermost

7.10 - Maybe: Personal Finance

Manage personal finances with Maybe, the open-source Mint/Personal Capital alternative.

Maybe is an open-source personal finance management application.

Maybe provides financial tracking, budget management, investment analysis, and more. It’s an open-source alternative to Mint and Personal Capital, giving you complete control over your financial data.

Quick Start

cd ~/pigsty/app/maybe
cp .env.example .env
vim .env                    # Must modify SECRET_KEY_BASE
make up                      # Start Maybe service

Access URL: http://maybe.pigsty or http://10.10.10.10:5002

Configuration

Configure in the .env file:

SECRET_KEY_BASE=your-secret-key-here    # Must modify!
DATABASE_URL=postgresql://...

Important: You must modify SECRET_KEY_BASE before first deployment!
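A suitably random value can be generated with Python's secrets module (an illustrative snippet; any sufficiently long random string will do):

```python
import secrets

# 64 random bytes rendered as 128 hex characters
secret_key_base = secrets.token_hex(64)
print(f"SECRET_KEY_BASE={secret_key_base}")
```

Paste the printed line into the .env file in place of the placeholder value.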

Features

  • Account Management: Track multiple bank accounts and credit cards
  • Budget Planning: Set up and track budgets
  • Investment Analysis: Monitor portfolio performance
  • Bill Reminders: Automatic reminders for upcoming bills
  • Privacy-first: Data is completely under your control

7.11 - Metabase: BI Analytics Tool

Use Metabase for rapid business intelligence analysis with a user-friendly interface for team self-service data exploration.

Metabase is a fast, easy-to-use open-source business intelligence tool that lets your team explore and visualize data without SQL knowledge.

Metabase provides a friendly user interface with rich chart types and supports connecting to various databases, making it an ideal choice for enterprise data analysis.

Quick Start

Pigsty provides a Docker Compose configuration file for Metabase in the software template directory:

cd ~/pigsty/app/metabase

Review and modify the .env configuration file:

vim .env    # Check configuration, recommend changing default credentials

Start the service:

make up     # Start Metabase with Docker Compose

Access Metabase:

  • Default URL: http://metabase.pigsty
  • Alternate URL: http://10.10.10.10:3001
  • First-time access requires initial setup

Management Commands

Pigsty provides convenient Makefile commands to manage Metabase:

make up      # Start Metabase service
make run     # Start with Docker (connect to external PostgreSQL)
make view    # Display Metabase access URL
make log     # View container logs
make info    # View service details
make stop    # Stop the service
make clean   # Stop and remove containers
make pull    # Pull the latest image
make rmi     # Remove Metabase image
make save    # Save image to file
make load    # Load image from file

Connect to PostgreSQL

Metabase can connect to PostgreSQL databases managed by Pigsty.

During Metabase initialization or when adding a database, select “PostgreSQL” and enter the connection information:

Database Type: PostgreSQL
Name: Custom name (e.g., "Production Database")
Host: 10.10.10.10
Port: 5432
Database Name: your_database
Username: dbuser_meta
Password: DBUser.Meta

After successful connection, Metabase will automatically scan the database schema, and you can start creating questions and dashboards.

Features

  • No SQL Required: Build queries through visual interface
  • Rich Chart Types: Line, bar, pie, map charts, and more
  • Interactive Dashboards: Create beautiful data dashboards
  • Auto Refresh: Schedule data and dashboard updates
  • Permission Management: Fine-grained user and data access control
  • SQL Mode: Advanced users can write SQL directly
  • Embedding: Embed charts into other applications
  • Alerting: Automatic notifications on data changes

Configuration

Metabase configuration is in the .env file:

# Metabase metadata database (PostgreSQL recommended)
MB_DB_TYPE=postgres
MB_DB_DBNAME=metabase
MB_DB_PORT=5432
MB_DB_USER=dbuser_metabase
MB_DB_PASS=DBUser.Metabase
MB_DB_HOST=10.10.10.10

# Application configuration
JAVA_OPTS=-Xmx2g

Recommended: Use a dedicated PostgreSQL database for storing Metabase metadata.

Data Persistence

Metabase metadata (users, questions, dashboards, etc.) is stored in the configured database.

If using H2 database (default), data is saved in the /data/metabase directory. Using PostgreSQL as the metadata database is strongly recommended for production environments.

Performance Optimization

  • Use PostgreSQL: Replace the default H2 database
  • Increase Memory: Add JVM memory with JAVA_OPTS=-Xmx4g
  • Database Indexes: Create indexes for frequently queried fields
  • Result Caching: Enable Metabase query result caching
  • Scheduled Updates: Set reasonable dashboard auto-refresh frequency

Security Recommendations

  1. Change Default Credentials: Modify metadata database username and password
  2. Enable HTTPS: Configure SSL certificates for production
  3. Configure Authentication: Enable SSO or LDAP authentication
  4. Restrict Access: Limit access through firewall
  5. Regular Backups: Back up the Metabase metadata database

7.12 - Kong: the Nginx API Gateway

Learn how to deploy Kong, the API gateway, with Docker Compose and use external PostgreSQL as the backend database

TL;DR

cd app/kong ; docker-compose up -d
make up         # pull up kong with docker-compose
make ui         # run swagger ui container
make log        # tail -f kong logs
make info       # introspect kong with jq
make stop       # stop kong container
make clean      # remove kong container
make rmui       # remove swagger ui container
make pull       # pull latest kong image
make rmi        # remove kong image
make save       # save kong image to /tmp/kong.tgz
make load       # load kong image from /tmp

Scripts

  • Default Port: 8000
  • Default SSL Port: 8443
  • Default Admin Port: 8001
  • Default Postgres Database: postgres://dbuser_kong:[email protected]:5432/kong
# postgres://dbuser_kong:[email protected]:5432/kong
- { name: kong, owner: dbuser_kong, revokeconn: true , comment: kong the api gateway database }
- { name: dbuser_kong, password: DBUser.Kong , pgbouncer: true , roles: [ dbrole_admin ] }

7.13 - Registry: Container Image Mirror

Deploy Docker Registry mirror service to accelerate Docker image pulls, especially useful for users in China.

Docker Registry mirror service caches images from Docker Hub and other registries.

It is particularly useful for users in China or regions with slow Docker Hub access, and can significantly reduce image pull times.

Quick Start

cd ~/pigsty/app/registry
make up     # Start Registry mirror service

Access URL: http://registry.pigsty or http://10.10.10.10:5000

Features

  • Image Caching: Cache images from Docker Hub and other registries
  • Web Interface: Optional image management UI
  • High Performance: Local caching dramatically improves pull speed
  • Storage Management: Configurable cleanup and management policies
  • Health Checks: Built-in health check endpoints

Configure Docker

Configure Docker to use the local mirror:

# Edit /etc/docker/daemon.json
{
  "registry-mirrors": ["http://10.10.10.10:5000"]
}

# Restart Docker
systemctl restart docker
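A malformed /etc/docker/daemon.json will keep the Docker daemon from starting, so it is worth validating the file before the restart; a minimal stdlib sketch (the helper name is made up for illustration):

```python
import json

def registry_mirrors(daemon_json_text: str) -> list:
    """Parse daemon.json content and return its registry-mirrors list."""
    cfg = json.loads(daemon_json_text)  # raises ValueError on malformed JSON
    mirrors = cfg.get("registry-mirrors", [])
    assert isinstance(mirrors, list), "registry-mirrors must be a JSON array"
    return mirrors

sample = '{"registry-mirrors": ["http://10.10.10.10:5000"]}'
print(registry_mirrors(sample))  # → ['http://10.10.10.10:5000']
```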

Storage Management

Image data is stored in the /data/registry directory. Reserve at least 100GB of space.

7.14 - Database Tools

Database management and development tools

7.15 - ByteBase: PG Schema Migration

Self-hosting bytebase with PostgreSQL managed by Pigsty

ByteBase

ByteBase is a database schema change management tool. The following command will start ByteBase on port 8887 of the meta node by default.

mkdir -p /data/bytebase/data;
docker run --init --name bytebase --restart always --detach --publish 8887:8887 --volume /data/bytebase/data:/var/opt/bytebase \
    bytebase/bytebase:1.0.4 --data /var/opt/bytebase --host http://ddl.pigsty --port 8887

Then visit http://10.10.10.10:8887/ or http://ddl.pigsty to access the ByteBase console. You have to create a “Project”, “Environment”, “Instance”, and “Database” before performing schema migration.

Public Demo: http://ddl.pigsty.cc

Default username & password: admin / pigsty


Bytebase Overview

Schema Migrator for PostgreSQL

cd app/bytebase; make up

Visit http://ddl.pigsty or http://10.10.10.10:8887

make up         # pull up bytebase with docker-compose in minimal mode
make run        # launch bytebase with docker, local data dir and external PostgreSQL
make view       # print bytebase access point
make log        # tail -f bytebase logs
make info       # introspect bytebase with jq
make stop       # stop bytebase container
make clean      # remove bytebase container
make pull       # pull latest bytebase image
make rmi        # remove bytebase image
make save       # save bytebase image to /tmp/bytebase.tgz
make load       # load bytebase image from /tmp

PostgreSQL Preparation

ByteBase uses its internal PostgreSQL database by default. You can use an external PostgreSQL for higher durability.

# postgres://dbuser_bytebase:[email protected]:5432/bytebase
db:   { name: bytebase, owner: dbuser_bytebase, comment: bytebase primary database }
user: { name: dbuser_bytebase , password: DBUser.Bytebase, roles: [ dbrole_admin ] }

If you wish to use an external PostgreSQL, drop the monitor schema, its views, and the pg_repack extension first:

DROP SCHEMA monitor CASCADE;
DROP EXTENSION pg_repack;

After ByteBase is initialized, you can recreate them with /pg/tmp/pg-init-template.sql:

psql bytebase < /pg/tmp/pg-init-template.sql

7.16 - PGAdmin4: PG Admin GUI Tool

Launch pgAdmin4 with docker, and load Pigsty server list into it

pgAdmin4 is a useful PostgreSQL management tool. Execute the following command to launch the pgadmin service on the admin node:

cd ~/pigsty/app/pgadmin ; docker-compose up -d

The default port for pgadmin is 8885, and you can access it through the following address:

http://adm.pigsty


Demo

Public Demo: http://adm.pigsty.cc

Credentials: [email protected] / pigsty

TL; DR

cd ~/pigsty/app/pgadmin   # enter docker compose dir
make up                   # launch pgadmin container
make conf view            # load pigsty server list

Shortcuts:

make up         # pull up pgadmin with docker-compose
make run        # launch pgadmin with docker
make view       # print pgadmin access point
make log        # tail -f pgadmin logs
make info       # introspect pgadmin with jq
make stop       # stop pgadmin container
make clean      # remove pgadmin container
make conf       # provision pgadmin with pigsty pg servers list
make dump       # dump servers.json from pgadmin container
make pull       # pull latest pgadmin image
make rmi        # remove pgadmin image
make save       # save pgadmin image to /tmp/pgadmin.tgz
make load       # load pgadmin image from /tmp

7.17 - PGWeb: Browser-based PG Client

Launch pgweb to access PostgreSQL via web browser

PGWEB: https://github.com/sosedoff/pgweb

Simple web-based and cross-platform PostgreSQL database explorer.

Public Demo: http://cli.pigsty.cc

TL; DR

cd ~/pigsty/app/pgweb ; make up

Visit http://cli.pigsty or http://10.10.10.10:8886

Try connecting with example URLs:

postgres://dbuser_meta:[email protected]:5432/meta?sslmode=disable
postgres://test:[email protected]:5432/test?sslmode=disable
make up         # pull up pgweb with docker compose
make run        # launch pgweb with docker
make view       # print pgweb access point
make log        # tail -f pgweb logs
make info       # introspect pgweb with jq
make stop       # stop pgweb container
make clean      # remove pgweb container
make pull       # pull latest pgweb image
make rmi        # remove pgweb image
make save       # save pgweb image to /tmp/docker/pgweb.tgz
make load       # load pgweb image from /tmp/docker/pgweb.tgz

7.18 - PostgREST: Generate REST API from Schema

Launch postgREST to generate REST API from PostgreSQL schema automatically

PostgREST is a binary component that automatically generates a REST API based on the PostgreSQL database schema.

For example, the following command will launch postgrest with docker (local port 8884, using default admin user, and expose Pigsty CMDB schema):

docker run --init --name postgrest --restart always --detach --publish 8884:8081 \
    -e PGRST_SERVER_PORT=8081 \
    -e PGRST_DB_URI='postgres://dbuser_dba:[email protected]:5432/meta' \
    -e PGRST_DB_SCHEMA='pigsty' \
    -e PGRST_DB_ANON_ROLE='dbuser_dba' \
    postgrest/postgrest   # connection settings (PGRST_*); adjust the URI / schema / role to your environment

Visiting http://10.10.10.10:8884 will show all auto-generated API definitions, with API documentation automatically exposed using the Swagger Editor.

If you wish to perform CRUD operations and design more fine-grained permission control, please refer to Tutorial 1 - The Golden Key to generate a signed JWT.
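The signed JWT mentioned above is a standard HS256 token; a minimal stdlib sketch of producing one (the role and secret values below are placeholders, not Pigsty defaults):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as required by the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce an HS256-signed JWT that PostgREST can verify with its jwt-secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

# the "role" claim is the database role PostgREST switches to;
# role and secret here are placeholders, not Pigsty defaults
token = sign_jwt({"role": "dbuser_meta", "exp": int(time.time()) + 3600},
                 "a-long-random-shared-jwt-secret")
print(token)
```

Send the token in an `Authorization: Bearer <token>` header to act as the claimed role.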

This is an example of creating pigsty cmdb API with PostgREST

cd ~/pigsty/app/postgrest ; docker-compose up -d

http://10.10.10.10:8884 is the default endpoint for PostgREST

http://10.10.10.10:8883 is the default api docs for PostgREST

make up         # pull up postgrest with docker-compose
make run        # launch postgrest with docker
make ui         # run swagger ui container
make view       # print postgrest access point
make log        # tail -f postgrest logs
make info       # introspect postgrest with jq
make stop       # stop postgrest container
make clean      # remove postgrest container
make rmui       # remove swagger ui container
make pull       # pull latest postgrest image
make rmi        # remove postgrest image
make save       # save postgrest image to /tmp/postgrest.tgz
make load       # load postgrest image from /tmp

Swagger UI

Launch a Swagger OpenAPI UI and visualize the PostgREST API on port 8883 with:

docker run --init --name swagger -p 8883:8080 -e API_URL=http://10.10.10.10:8884 swaggerapi/swagger-ui
# docker run -d -e API_URL=http://10.10.10.10:8884 -p 8883:8080 swaggerapi/swagger-editor # swagger editor

Check http://10.10.10.10:8883/

7.19 - Electric: PGLite Sync Engine

Use Electric to solve PostgreSQL data synchronization challenges with partial replication and real-time data transfer.

Electric is a PostgreSQL sync engine that solves complex data synchronization problems.

Electric supports partial replication, fan-out delivery, and efficient data transfer, making it ideal for building real-time and offline-first applications.

Quick Start

cd ~/pigsty/app/electric
make up     # Start Electric service

Access URL: http://electric.pigsty or http://10.10.10.10:3000

Features

  • Partial Replication: Sync only the data you need
  • Real-time Sync: Millisecond-level data updates
  • Offline-first: Work offline with automatic sync
  • Conflict Resolution: Automatic handling of data conflicts
  • Type Safety: TypeScript support

7.20 - Jupyter: AI Notebook & IDE

Run Jupyter Lab in container, and access PostgreSQL database

To run a Jupyter notebook with Docker, you have to:

    1. change the default password in .env: JUPYTER_TOKEN
    2. create data dir with proper permission: make dir, owned by 1000:100
    3. make up to pull up jupyter with docker compose
cd ~/pigsty/app/jupyter ; make dir up

Visit http://lab.pigsty or http://10.10.10.10:8888, the default password is pigsty

Prepare

Create a data directory /data/jupyter, with the default uid & gid 1000:100:

make dir   # mkdir -p /data/jupyter; chown -R 1000:100 /data/jupyter

Connect to Postgres

Use the Jupyter terminal to install the psycopg2-binary & psycopg2 packages.

pip install psycopg2-binary psycopg2

# install with a mirror
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple psycopg2-binary psycopg2

pip install --upgrade pip
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

Or installation with conda:

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/

then use the driver in your notebook

import psycopg2

conn = psycopg2.connect('postgres://dbuser_dba:[email protected]:5432/meta')
cursor = conn.cursor()
cursor.execute('SELECT * FROM pg_stat_activity')
for i in cursor.fetchall():
    print(i)

Alias

make up         # pull up jupyter with docker compose
make dir        # create required /data/jupyter and set owner
make run        # launch jupyter with docker
make view       # print jupyter access point
make log        # tail -f jupyter logs
make info       # introspect jupyter with jq
make stop       # stop jupyter container
make clean      # remove jupyter container
make pull       # pull latest jupyter image
make rmi        # remove jupyter image
make save       # save jupyter image to /tmp/docker/jupyter.tgz
make load       # load jupyter image from /tmp/docker/jupyter.tgz

7.21 - Data Applications

PostgreSQL-based data visualization applications

7.22 - PGLOG: PostgreSQL Log Analysis Application

A sample Applet included with Pigsty for analyzing PostgreSQL CSV log samples

PGLOG is a sample application included with Pigsty that uses the pglog.sample table in MetaDB as its data source. You simply need to load logs into this table, then access the related dashboard.

Pigsty provides convenient commands for pulling CSV logs and loading them into the sample table. On the meta node, the following shortcut commands are available by default:

catlog  [node=localhost]  [date=today]   # Print CSV log to stdout
pglog                                    # Load CSVLOG from stdin
pglog12                                  # Load PG12 format CSVLOG
pglog13                                  # Load PG13 format CSVLOG
pglog14                                  # Load PG14 format CSVLOG (=pglog)

catlog | pglog                       # Analyze current node's log for today
catlog node-1 '2021-07-15' | pglog   # Analyze node-1's csvlog for 2021-07-15

Next, you can access the following links to view the sample log analysis interface.

  • PGLOG Overview: Present the entire CSV log sample details, aggregated by multiple dimensions.

  • PGLOG Session: Present detailed information about a specific connection in the log sample.

The catlog command pulls CSV database logs from a specific node for a specific date and writes them to stdout.

By default, catlog pulls logs from the current node for today. You can specify the node and date through parameters.

Using pglog and catlog together, you can quickly pull database CSV logs for analysis.

catlog | pglog                       # Analyze current node's log for today
catlog node-1 '2021-07-15' | pglog   # Analyze node-1's csvlog for 2021-07-15

7.23 - NOAA ISD Global Weather Station Historical Data Query

Demonstrate how to import data into a database using the ISD dataset as an example

If you have a database and don’t know what to do with it, why not try this open-source project: Vonng/isd

You can directly reuse the monitoring system Grafana to interactively browse sub-hourly meteorological data from nearly 30,000 surface weather stations over the past 120 years.

This is a fully functional data application that can query meteorological observation records from 30,000 global surface weather stations since 1901.

Project URL: https://github.com/Vonng/isd

Online Demo: https://demo.pigsty.io/d/isd-overview

isd-overview.jpg

Quick Start

Clone this repository

git clone https://github.com/Vonng/isd.git; cd isd;

Prepare a PostgreSQL instance

The PostgreSQL instance should have the PostGIS extension enabled. Use the PGURL environment variable to pass database connection information:

# Pigsty uses dbuser_dba as the default admin account with password DBUser.DBA
export PGURL=postgres://dbuser_dba:[email protected]:5432/meta?sslmode=disable
psql "${PGURL}" -c 'SELECT 1'  # Check if connection is available

Fetch and import ISD weather station metadata

This is a daily-updated weather station metadata file containing station longitude/latitude, elevation, name, country, province, and other information. Use the following command to download and import:

make reload-station   # Equivalent to downloading the latest station data then loading: get-station + load-station

Fetch and import the latest isd.daily data

isd.daily is a daily-updated dataset containing daily observation data summaries from global weather stations. Use the following command to download and import. Note that raw data downloaded directly from the NOAA website needs to be parsed before it can be loaded into the database, so you need to download or build an ISD data parser.

make get-parser       # Download the parser binary from Github, or you can build directly with go using make build
make reload-daily     # Download and import the latest isd.daily data for this year into the database

Load pre-parsed CSV dataset

The ISD Daily dataset has some dirty data and duplicate data. If you don’t want to manually parse and clean it, a stable pre-parsed CSV dataset is also provided here.

This dataset contains isd.daily data up to 2023-06-24. You can download and import it directly into PostgreSQL without needing a parser.

make get-stable       # Get the stable isd.daily historical dataset from Github
make load-stable      # Load the downloaded stable historical dataset into the PostgreSQL database

More Data

Two parts of the ISD dataset are updated daily: weather station metadata and the latest year’s isd.daily (e.g., the 2023 tarball).

You can use the following command to download and refresh these two parts. If the dataset hasn’t been updated, these commands won’t re-download the same data package:

make reload           # Actually: reload-station + reload-daily

You can also use the following commands to download and load isd.daily data for a specific year:

bin/get-daily  2022                   # Get daily weather observation summary for 2022 (1900-2023)
bin/load-daily "${PGURL}" 2022        # Load daily weather observation summary for 2022 (1900-2023)

In addition to the daily summary isd.daily, ISD also provides more detailed sub-hourly raw observation records isd.hourly. The download and load methods are similar:

bin/get-hourly  2022                  # Download hourly observation records for a specific year (e.g., 2022, options 1900-2023)
bin/load-hourly "${PGURL}" 2022       # Load hourly observation records for a specific year

Data

Dataset Overview

ISD provides four datasets: sub-hourly raw observation data, daily statistical summaries, monthly statistical summaries, and yearly statistical summaries.

Dataset       Notes
ISD Hourly    Sub-hourly observation records
ISD Daily     Daily statistical summary
ISD Monthly   Not used, can be calculated from isd.daily
ISD Yearly    Not used, can be calculated from isd.daily

Daily Summary Dataset

  • Compressed package size 2.8GB (as of 2023-06-24)
  • Table size 24GB, index size 6GB, total size approximately 30GB in PostgreSQL
  • If timescaledb compression is enabled, total size can be compressed to 4.5 GB

Sub-hourly Observation Data

  • Total compressed package size 117GB
  • After loading into database: table size 1TB+, index size 600GB+, total size 1.6TB

Database Schema

Weather Station Metadata Table

CREATE TABLE isd.station
(
    station    VARCHAR(12) PRIMARY KEY,
    usaf       VARCHAR(6) GENERATED ALWAYS AS (substring(station, 1, 6)) STORED,
    wban       VARCHAR(5) GENERATED ALWAYS AS (substring(station, 7, 5)) STORED,
    name       VARCHAR(32),
    country    VARCHAR(2),
    province   VARCHAR(2),
    icao       VARCHAR(4),
    location   GEOMETRY(POINT),
    longitude  NUMERIC GENERATED ALWAYS AS (Round(ST_X(location)::NUMERIC, 6)) STORED,
    latitude   NUMERIC GENERATED ALWAYS AS (Round(ST_Y(location)::NUMERIC, 6)) STORED,
    elevation  NUMERIC,
    period     daterange,
    begin_date DATE GENERATED ALWAYS AS (lower(period)) STORED,
    end_date   DATE GENERATED ALWAYS AS (upper(period)) STORED
);
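The longitude/latitude generated columns make quick out-of-database checks easy; for example, a plain haversine distance between two station coordinates (illustrative only — inside PostgreSQL you would normally use PostGIS ST_Distance on the location column):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometers between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# roughly one degree of latitude, ~111 km
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))
```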

Daily Summary Table

CREATE TABLE IF NOT EXISTS isd.daily
(
    station     VARCHAR(12) NOT NULL, -- station number 6USAF+5WBAN
    ts          DATE        NOT NULL, -- observation date
    -- Temperature & Dew Point
    temp_mean   NUMERIC(3, 1),        -- mean temperature ℃
    temp_min    NUMERIC(3, 1),        -- min temperature ℃
    temp_max    NUMERIC(3, 1),        -- max temperature ℃
    dewp_mean   NUMERIC(3, 1),        -- mean dew point ℃
    -- Air Pressure
    slp_mean    NUMERIC(5, 1),        -- sea level pressure (hPa)
    stp_mean    NUMERIC(5, 1),        -- station pressure (hPa)
    -- Visibility
    vis_mean    NUMERIC(6),           -- visible distance (m)
    -- Wind Speed
    wdsp_mean   NUMERIC(4, 1),        -- average wind speed (m/s)
    wdsp_max    NUMERIC(4, 1),        -- max wind speed (m/s)
    gust        NUMERIC(4, 1),        -- max wind gust (m/s)
    -- Precipitation / Snow Depth
    prcp_mean   NUMERIC(5, 1),        -- precipitation (mm)
    prcp        NUMERIC(5, 1),        -- rectified precipitation (mm)
    sndp        NUMERIC(5, 1),        -- snow depth (mm)
    -- FRSHTT (Fog/Rain/Snow/Hail/Thunder/Tornado)
    is_foggy    BOOLEAN,              -- (F)og
    is_rainy    BOOLEAN,              -- (R)ain or Drizzle
    is_snowy    BOOLEAN,              -- (S)now or pellets
    is_hail     BOOLEAN,              -- (H)ail
    is_thunder  BOOLEAN,              -- (T)hunder
    is_tornado  BOOLEAN,              -- (T)ornado or Funnel Cloud
    -- Record counts used for statistical aggregation
    temp_count  SMALLINT,             -- record count for temp
    dewp_count  SMALLINT,             -- record count for dew point
    slp_count   SMALLINT,             -- record count for sea level pressure
    stp_count   SMALLINT,             -- record count for station pressure
    wdsp_count  SMALLINT,             -- record count for wind speed
    visib_count SMALLINT,             -- record count for visible distance
    -- Temperature flags
    temp_min_f  BOOLEAN,              -- aggregate min temperature
    temp_max_f  BOOLEAN,              -- aggregate max temperature
    prcp_flag   CHAR,                 -- precipitation flag: ABCDEFGHI
    PRIMARY KEY (station, ts)
); -- PARTITION BY RANGE (ts);
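The six is_* boolean columns correspond to the FRSHTT indicator field in the raw NOAA data; the mapping can be sketched as follows (assuming the conventional 6-character FRSHTT string):

```python
FRSHTT_FIELDS = ("is_foggy", "is_rainy", "is_snowy",
                 "is_hail", "is_thunder", "is_tornado")

def decode_frshtt(flags: str) -> dict:
    """Map a 6-character FRSHTT indicator (e.g. '010000') to boolean columns."""
    assert len(flags) == 6, "FRSHTT indicator must be exactly 6 characters"
    return {name: ch == "1" for name, ch in zip(FRSHTT_FIELDS, flags)}

print(decode_frshtt("010000"))  # rain/drizzle only
```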

Sub-hourly Raw Observation Data Table

CREATE TABLE IF NOT EXISTS isd.hourly
(
    station    VARCHAR(12) NOT NULL, -- station id
    ts         TIMESTAMP   NOT NULL, -- timestamp
    -- air
    temp       NUMERIC(3, 1),        -- [-93.2,+61.8]
    dewp       NUMERIC(3, 1),        -- [-98.2,+36.8]
    slp        NUMERIC(5, 1),        -- [8600,10900]
    stp        NUMERIC(5, 1),        -- [4500,10900]
    vis        NUMERIC(6),           -- [0,160000]
    -- wind
    wd_angle   NUMERIC(3),           -- [1,360]
    wd_speed   NUMERIC(4, 1),        -- [0,90]
    wd_gust    NUMERIC(4, 1),        -- [0,110]
    wd_code    VARCHAR(1),           -- code that denotes the character of the WIND-OBSERVATION.
    -- cloud
    cld_height NUMERIC(5),           -- [0,22000]
    cld_code   VARCHAR(2),           -- cloud code
    -- water
    sndp       NUMERIC(5, 1),        -- mm snow
    prcp       NUMERIC(5, 1),        -- mm precipitation
    prcp_hour  NUMERIC(2),           -- precipitation duration in hour
    prcp_code  VARCHAR(1),           -- precipitation type code
    -- sky
    mw_code    VARCHAR(2),           -- manual weather observation code
    aw_code    VARCHAR(2),           -- auto weather observation code
    pw_code    VARCHAR(1),           -- weather code of past period of time
    pw_hour    NUMERIC(2),           -- duration of pw_code period
    -- misc
    -- remark     TEXT,
    -- eqd        TEXT,
    data       JSONB                 -- extra data
) PARTITION BY RANGE (ts);

Parser

The raw data provided by NOAA ISD is in a highly compressed proprietary format that needs to be processed through a parser before it can be converted into database table format.

For the Daily and Hourly datasets, two parsers are provided here: isdd and isdh. Both parsers take annual data compressed packages as input, produce CSV results as output, and work in pipeline mode as shown below:

NAME
        isd -- Integrated Surface Dataset Parser

SYNOPSIS
        isd daily   [-i <input|stdin>] [-o <output|stdout>] [-v]
        isd hourly  [-i <input|stdin>] [-o <output|stdout>] [-v] [-d raw|ts-first|hour-first]

DESCRIPTION
        The isd program takes NOAA ISD daily/hourly raw tarball data as input
        and generates parsed data in CSV format as output. Works in pipe mode:

        cat data/daily/2023.tar.gz | bin/isd daily -v | psql ${PGURL} -AXtwqc "COPY isd.daily FROM STDIN CSV;"

        isd daily  -v -i data/daily/2023.tar.gz  | psql ${PGURL} -AXtwqc "COPY isd.daily FROM STDIN CSV;"
        isd hourly -v -i data/hourly/2023.tar.gz | psql ${PGURL} -AXtwqc "COPY isd.hourly FROM STDIN CSV;"

OPTIONS
        -i  <input>     input file, stdin by default
        -o  <output>    output file, stdout by default
        -p  <profpath>  pprof file path, enable if specified
        -d              de-duplicate rows for hourly dataset (raw, ts-first, hour-first)
        -v              verbose mode
        -h              print help
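The pipe-mode invocation shown in the synopsis can also be assembled programmatically; a hypothetical Python wrapper that builds the same shell pipeline (bin/isd and a reachable PGURL must exist on the host):

```python
import shlex

def isd_load_command(dataset: str, year: int, pgurl: str) -> str:
    """Build the parse-and-COPY shell pipeline for one yearly tarball."""
    assert dataset in ("daily", "hourly"), "dataset must be daily or hourly"
    copy_sql = f"COPY isd.{dataset} FROM STDIN CSV;"
    return (f"bin/isd {dataset} -v -i data/{dataset}/{year}.tar.gz"
            f" | psql {shlex.quote(pgurl)} -AXtwqc {shlex.quote(copy_sql)}")

print(isd_load_command("daily", 2023,
                       "postgres://dbuser_dba:[email protected]:5432/meta"))
```

The shlex.quote calls keep the connection string and COPY statement intact when the command is handed to a shell.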

User Interface

Several dashboards made with Grafana are provided here for exploring the ISD dataset and querying weather stations and historical meteorological data.


ISD Overview

Global overview with overall metrics and weather station navigation.

isd-overview.jpg


ISD Country

Display all weather stations within a single country/region.

isd-country.jpg


ISD Station

Display detailed information for a single weather station, including metadata and daily/monthly/yearly summary metrics.

ISD Station Dashboard

isd-station.jpg


ISD Detail

Display raw sub-hourly observation metric data for a weather station, requires the isd.hourly dataset.

ISD Station Dashboard

isd-detail.jpg




7.24 - WHO COVID-19 Pandemic Dashboard

A sample Applet included with Pigsty for visualizing World Health Organization official pandemic data

Covid is a sample Applet included with Pigsty for visualizing the World Health Organization’s official pandemic data dashboard.

You can browse COVID-19 infection and death cases for each country and region, as well as global pandemic trends.


Overview

GitHub Repository: https://github.com/pgsty/pigsty-app/tree/master/covid

Online Demo: https://demo.pigsty.io/d/covid


Installation

Enter the application directory on the admin node and execute make to complete the installation.

make            # Complete all configuration

Other sub-tasks:

make reload     # download latest data and pour it again
make ui         # install grafana dashboards
make sql        # install database schemas
make download   # download latest data
make load       # load downloaded data into database

7.25 - StackOverflow Global Developer Survey

Analyze database-related data from StackOverflow’s global developer survey over the past seven years

Overview

GitHub Repository: https://github.com/pgsty/pigsty-app/tree/master/db

Online Demo: https://demo.pigsty.io/d/sf-survey

7.26 - DB-Engines Database Popularity Trend Analysis

Analyze database management systems on DB-Engines and browse their popularity evolution

Overview

GitHub Repository: https://github.com/pgsty/pigsty-app/tree/master/db

Online Demo: https://demo.pigsty.io/d/db-engine

7.27 - AWS & Aliyun Server Pricing

Analyze compute and storage pricing on Aliyun / AWS (ECS/ESSD)

Overview

GitHub Repository: https://github.com/pgsty/pigsty-app/tree/master/cloud

Online Demo: https://demo.pigsty.io/d/ecs

Article: Analyzing Computing Costs: Has Aliyun Really Reduced Prices?

Data Source

Aliyun ECS pricing can be obtained as raw CSV data from Price Calculator - Pricing Details - Price Download.

Schema

Download the Aliyun pricing details CSV and import it into PostgreSQL for analysis:

CREATE EXTENSION file_fdw;
CREATE SERVER fs FOREIGN DATA WRAPPER file_fdw;

DROP FOREIGN TABLE IF EXISTS aliyun_ecs CASCADE;
CREATE FOREIGN TABLE aliyun_ecs
    (
        "region" text,
        "system" text,
        "network" text,
        "isIO" bool,
        "instanceId" text,
        "hourlyPrice" numeric,
        "weeklyPrice" numeric,
        "standard" numeric,
        "monthlyPrice" numeric,
        "yearlyPrice" numeric,
        "2yearPrice" numeric,
        "3yearPrice" numeric,
        "4yearPrice" numeric,
        "5yearPrice" numeric,
        "id" text,
        "instanceLabel" text,
        "familyId" text,
        "serverType" text,
        "cpu" text,
        "localStorage" text,
        "NvmeSupport" text,
        "InstanceFamilyLevel" text,
        "EniTrunkSupported" text,
        "InstancePpsRx" text,
        "GPUSpec" text,
        "CpuTurboFrequency" text,
        "InstancePpsTx" text,
        "InstanceTypeId" text,
        "GPUAmount" text,
        "InstanceTypeFamily" text,
        "SecondaryEniQueueNumber" text,
        "EniQuantity" text,
        "EniPrivateIpAddressQuantity" text,
        "DiskQuantity" text,
        "EniIpv6AddressQuantity" text,
        "InstanceCategory" text,
        "CpuArchitecture" text,
        "EriQuantity" text,
        "MemorySize" numeric,
        "EniTotalQuantity" numeric,
        "PhysicalProcessorModel" text,
        "InstanceBandwidthRx" numeric,
        "CpuCoreCount" numeric,
        "Generation" text,
        "CpuSpeedFrequency" numeric,
        "PrimaryEniQueueNumber" text,
        "LocalStorageCategory" text,
        "InstanceBandwidthTx" text,
        "TotalEniQueueQuantity" text
        ) SERVER fs OPTIONS ( filename '/tmp/aliyun-ecs.csv', format 'csv',header 'true');

Similarly for AWS EC2, you can download the price list from Vantage:


DROP FOREIGN TABLE IF EXISTS aws_ec2 CASCADE;
CREATE FOREIGN TABLE aws_ec2
    (
        "name" TEXT,
        "id" TEXT,
        "Memory" TEXT,
        "vCPUs" TEXT,
        "GPUs" TEXT,
        "ClockSpeed" TEXT,
        "InstanceStorage" TEXT,
        "NetworkPerformance" TEXT,
        "ondemand" TEXT,
        "reserve" TEXT,
        "spot" TEXT
        ) SERVER fs OPTIONS ( filename '/tmp/aws-ec2.csv', format 'csv',header 'true');



DROP VIEW IF EXISTS ecs;
CREATE VIEW ecs AS
SELECT "region"                                       AS region,
       "id"                                           AS id,
       "instanceLabel"                                AS name,
       "familyId"                                     AS family,
       "CpuCoreCount"                                 AS cpu,
       "MemorySize"                                   AS mem,
       round("5yearPrice" / "CpuCoreCount" / 60, 2)   AS ycm5, -- ¥ / (core·month)
       round("4yearPrice" / "CpuCoreCount" / 48, 2)   AS ycm4, -- ¥ / (core·month)
       round("3yearPrice" / "CpuCoreCount" / 36, 2)   AS ycm3, -- ¥ / (core·month)
       round("2yearPrice" / "CpuCoreCount" / 24, 2)   AS ycm2, -- ¥ / (core·month)
       round("yearlyPrice" / "CpuCoreCount" / 12, 2)  AS ycm1, -- ¥ / (core·month)
       round("standard" / "CpuCoreCount", 2)          AS ycmm, -- ¥ / (core·month)
       round("hourlyPrice" / "CpuCoreCount" * 720, 2) AS ycmh, -- ¥ / (core·month)
       "CpuSpeedFrequency"::NUMERIC                   AS freq,
       "CpuTurboFrequency"::NUMERIC                   AS freq_turbo,
       "Generation"                                   AS generation
FROM aliyun_ecs
WHERE system = 'linux';

DROP VIEW IF EXISTS ec2;
CREATE VIEW ec2 AS
SELECT id,
       name,
       split_part(id, '.', 1)                                                               as family,
       split_part(id, '.', 2)                                                               as spec,
       (regexp_match(split_part(id, '.', 1), '^[a-zA-Z]+(\d)[a-z0-9]*'))[1]                 as gen,
       regexp_substr("vCPUs", '^[0-9]+')::int                                               as cpu,
       regexp_substr("Memory", '^[0-9]+')::int                                              as mem,
       CASE spot
           WHEN 'unavailable' THEN NULL
           ELSE round((regexp_substr("spot", '([0-9]+\.[0-9]+)')::NUMERIC * 7.2), 2) END     AS spot,
       CASE ondemand
           WHEN 'unavailable' THEN NULL
           ELSE round((regexp_substr("ondemand", '([0-9]+\.[0-9]+)')::NUMERIC * 7.2), 2) END AS ondemand,
       CASE reserve
           WHEN 'unavailable' THEN NULL
           ELSE round((regexp_substr("reserve", '([0-9]+\.[0-9]+)')::NUMERIC * 7.2), 2) END  AS reserve,
       "ClockSpeed"                                                                         AS freq
FROM aws_ec2;
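Both views normalize prices to a common ¥/(core·month) basis, which makes cross-cloud comparison a simple aggregate query. A sketch of such a comparison, assuming the CSVs above have been downloaded to /tmp and the views created (actual numbers depend on your price list snapshot):

```sql
-- Aliyun: cheapest instance families by 3-year prepaid unit price
SELECT family, min(ycm3) AS yuan_per_core_month
FROM ecs
WHERE cpu >= 4
GROUP BY family
ORDER BY 2
LIMIT 10;

-- AWS: on-demand hourly price converted to the same ¥/(core·month) basis
-- (the ec2 view already converts USD to CNY; 720 hours ≈ one month)
SELECT family, min(round(ondemand * 720 / cpu, 2)) AS yuan_per_core_month
FROM ec2
WHERE cpu >= 4 AND ondemand IS NOT NULL
GROUP BY family
ORDER BY 2
LIMIT 10;
```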

Visualization

8 - Conf Templates

Batteries-included configuration templates for specific scenarios, with detailed explanations.

Pigsty provides various ready-to-use configuration templates for different deployment scenarios.

You can specify a configuration template with the -c option during configure. If no template is specified, the default meta template is used.

| Category         | Templates                                                             |
|------------------|-----------------------------------------------------------------------|
| Solo Templates   | meta, rich, fat, slim, infra, vibe                                    |
| Kernel Templates | pgsql, citus, mssql, polar, ivory, mysql, pgtde, oriole, supabase     |
| HA Templates     | ha/simu, ha/full, ha/safe, ha/trio, ha/dual                           |
| App Templates    | app/odoo, app/dify, app/electric, app/maybe, app/teable, app/registry |
| Misc Templates   | demo/el, demo/debian, demo/demo, demo/minio, build/oss, build/pro     |
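The template name is passed verbatim to configure with -c; for example (the IP address is a placeholder for your own primary node):

```shell
./configure -c rich                     # feature-rich single-node template
./configure -c ha/trio -i 10.10.10.10   # three-node HA template with explicit primary IP
```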

8.1 - Solo Templates

8.2 - meta

Default single-node installation template with extensive configuration parameter descriptions

The meta configuration template is Pigsty’s default template, designed to deliver Pigsty’s core functionality (deploying PostgreSQL) on a single node.

To maximize compatibility, meta installs only the minimum required software set, so that it runs on all supported OS distributions and architectures.


Overview

  • Config Name: meta
  • Node Count: Single node
  • Description: Default single-node installation template with extensive configuration parameter descriptions and minimum required feature set.
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, slim, fat

Usage: This is the default config template, so there’s no need to specify -c meta explicitly during configure:

./configure [-i <primary_ip>]

For example, if you want to install PostgreSQL 17 rather than the default 18, you can use the -v arg in configure:

./configure -v 17   # or 16,15,14,13....

Content

Source: pigsty/conf/meta.yml

---
#==============================================================#
# File      :   meta.yml
# Desc      :   Pigsty default 1-node online install config
# Ctime     :   2020-05-22
# Mtime     :   2026-01-22
# Docs      :   https://pigsty.io/docs/conf/meta
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the default 1-node configuration template, with:
# INFRA, NODE, PGSQL, ETCD, MINIO, DOCKER, APP (pgadmin)
# with basic pg extensions: postgis, pgvector
#
# Work with PostgreSQL 14-18 on all supported platform
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://pigsty.io/docs/pgsql/ext/
        pg_extensions: [ postgis, pgvector ]

        # define business users/roles : https://pigsty.io/docs/pgsql/config/user
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }

        # define business databases : https://pigsty.io/docs/pgsql/config/db
        pg_databases:
          - name: meta
            baseline: cmdb.sql
            comment: "pigsty meta database"
            schemas: [pigsty]
            # define extensions in database : https://pigsty.io/docs/pgsql/ext/create
            extensions: [ postgis, vector ]

        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # define (OPTIONAL) L2 VIP that bind to primary
        #pg_vip_enabled: true
        #pg_vip_address: 10.10.10.2/24
        #pg_vip_interface: eth1


    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: false   # disable in 1-node mode :  https://pigsty.io/docs/infra/admin/repo
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # ETCD : https://pigsty.io/docs/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://pigsty.io/docs/minio
    #----------------------------------------------#
    #minio:
    #  hosts:
    #    10.10.10.10: { minio_seq: 1 }
    #  vars:
    #    minio_cluster: minio
    #    minio_users:                      # list of minio user to be created
    #      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
    #      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
    #      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://pigsty.io/docs/docker
    # APP    : https://pigsty.io/docs/app
    #----------------------------------------------#
    # launch example pgadmin app with: ./app.yml (http://10.10.10.10:8885 [email protected] / pigsty)
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enabled docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]
              PGADMIN_DEFAULT_PASSWORD: pigsty


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
      pgadmin : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      #minio  : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    node_repo_modules: 'node,infra,pgsql' # add these repos directly to the singleton node
    #node_repo_modules: local             # use this if you want to build & user local repo
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with the latest version

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ]                 # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://pigsty.io/docs/pgsql/backup
    #----------------------------------------------#
    # if you want to use minio as backup repo instead of 'local' fs, uncomment this, and configure `pgbackrest_repo`
    # you can also use external object storage as backup repo
    #pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    #pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
    #  local:                          # default pgbackrest repo with local posix fs
    #    path: /pg/backup              # local backup directory, `/pg/backup` by default
    #    retention_full_type: count    # retention full backups by count
    #    retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
    #  minio:                          # optional minio repo for pgbackrest
    #    type: s3                      # minio is s3-compatible, so s3 is used
    #    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    #    s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
    #    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    #    s3_key: pgbackrest            # minio user access key for pgbackrest
    #    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    #    s3_uri_style: path            # use path style uri for minio rather than host style
    #    path: /pgbackrest             # minio backup path, default is `/pgbackrest`
    #    storage_port: 9000            # minio port, 9000 by default
    #    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    #    block: y                      # Enable block incremental backup
    #    bundle: y                     # bundle small files into a single file
    #    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    #    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    #    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #    retention_full_type: time     # retention full backup by time on minio repo
    #    retention_full: 14            # keep full backup for last 14 days
    #  s3: # any s3 compatible service is fine
    #    type: s3
    #    s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
    #    s3_region: oss-cn-beijing
    #    s3_bucket: <your_bucket_name>
    #    s3_key: <your_access_key>
    #    s3_key_secret: <your_secret_key>
    #    s3_uri_style: host
    #    path: /pgbackrest
    #    bundle: y                     # bundle small files into a single file
    #    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    #    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    #    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    #    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    #    retention_full_type: time     # retention full backup by time on minio repo
    #    retention_full: 14            # keep full backup for last 14 days

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The meta template is Pigsty’s default getting-started configuration, designed for quick onboarding.

Use Cases:

  • First-time Pigsty users
  • Quick deployment in development and testing environments
  • Small production environments running on a single machine
  • As a base template for more complex deployments

Key Features:

  • Online installation mode; does not build a local software repository (repo_enabled: false)
  • Installs PostgreSQL 18 by default, with the postgis and pgvector extensions
  • Includes the complete monitoring infrastructure (Grafana, Prometheus, Loki, etc.)
  • Preconfigured Docker and pgAdmin application examples
  • MinIO backup storage is disabled by default and can be enabled as needed

Notes:

  • Default passwords are sample passwords; must be changed for production environments
  • Single-node etcd has no high availability guarantee, suitable for development and testing
  • If you need to build a local software repository, use the rich template
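To replace the sample passwords, override the corresponding global parameters in your generated pigsty.yml before running deploy.yml. The parameter names below are the ones defined in this template; the values are placeholders:

```yaml
grafana_admin_password: '<your-grafana-password>'
pg_admin_password: '<your-dba-password>'
pg_monitor_password: '<your-monitor-password>'
pg_replication_password: '<your-replication-password>'
patroni_password: '<your-patroni-password>'
```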

8.3 - rich

Feature-rich single-node configuration with local software repository, all extensions, MinIO backup, and complete examples

The rich configuration template is an enhanced version of meta, designed for users who need to experience complete functionality.

If you want to build a local software repository, use MinIO for backup storage, run Docker applications, or need preconfigured business databases, use this template.


Overview

  • Config Name: rich
  • Node Count: Single node
  • Description: Feature-rich single-node configuration, adding local software repository, MinIO backup, complete extensions, Docker application examples on top of meta
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, slim, fat

This template’s main enhancements over meta:

  • Builds a local software repository (repo_enabled: true) and downloads all PG extensions
  • Enables single-node MinIO as PostgreSQL backup storage
  • Preinstalls TimescaleDB, pgvector, pg_wait_sampling, and other extensions
  • Includes detailed, commented examples of user/database/service definitions
  • Adds a Redis primary-replica instance example
  • Ships a commented-out stub for a three-node pg-test HA cluster

Usage:

./configure -c rich [-i <primary_ip>]

Content

Source: pigsty/conf/rich.yml

---
#==============================================================#
# File      :   rich.yml
# Desc      :   Pigsty feature-rich 1-node online install config
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://pigsty.io/docs/conf/rich
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the enhanced version of default meta.yml, which has:
# - almost all available postgres extensions
# - build local software repo for entire env
# - 1 node minio used as central backup repo
# - cluster stub for 3-node pg-test / ferret / redis
# - stub for nginx, certs, and website self-hosting config
# - detailed comments for database / user / service
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c rich
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://pigsty.io/docs/pgsql/ext/
        pg_extensions: [ postgis, timescaledb, pgvector, pg_wait_sampling ]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'

        # define business users/roles : https://pigsty.io/docs/pgsql/config/user
        pg_users:
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, the password. can be a scram-sha-256 hash string or plain text
            #state: create                   # optional, create|absent, 'create' by default, use 'absent' to drop user
            #login: true                     # optional, can log in, true by default (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create databases? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to the pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin|readonly|readwrite|offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
            # Enhanced roles syntax (PG16+): roles can be string or object with options:
            #   - dbrole_readwrite                       # simple string: GRANT role
            #   - { name: role, admin: true }            # GRANT WITH ADMIN OPTION
            #   - { name: role, set: false }             # PG16: REVOKE SET OPTION
            #   - { name: role, inherit: false }         # PG16: REVOKE INHERIT OPTION
            #   - { name: role, state: absent }          # REVOKE membership
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database   }
          #- {name: dbuser_remove ,state: absent }       # use state: absent to remove a user

        # define business databases : https://pigsty.io/docs/pgsql/config/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among the ansible search path, e.g.: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - vector                      # install pgvector for vector similarity search
              - postgis                     # install postgis for geospatial type & index
              - timescaledb                 # install timescaledb for time-series data
              - { name: pg_wait_sampling, schema: monitor } # install pg_wait_sampling on monitor schema
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }

        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # define (OPTIONAL) L2 VIP that bind to primary
        #pg_vip_enabled: true
        #pg_vip_address: 10.10.10.2/24
        #pg_vip_interface: eth1

    #----------------------------------------------#
    # PGSQL HA Cluster Example: 3-node pg-test
    #----------------------------------------------#
    #pg-test:
    #  hosts:
    #    10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
    #    10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
    #    10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
    #  vars:
    #    pg_cluster: pg-test           # define pgsql cluster name
    #    pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
    #    pg_databases: [{ name: test }]
    #    # define business service here: https://pigsty.io/docs/pgsql/service
    #    pg_services:                        # extra services in addition to pg_default_services, array of service definition
    #      # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
    #      - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
    #        port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
    #        ip: "*"                         # optional, service bind ip address, `*` for all ip by default
    #        selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
    #        dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
    #        check: /sync                    # optional, health check url path, / by default
    #        backup: "[? pg_role == `primary`]"  # backup server selector
    #        maxconn: 3000                   # optional, max allowed front-end connection
    #        balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
    #        options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'
    #    pg_vip_enabled: true
    #    pg_vip_address: 10.10.10.3/24
    #    pg_vip_interface: eth1
    #    pg_crontab:  # make a full backup on monday 1am, and an incremental backup during weekdays
    #      - '00 01 * * 1 /pg/bin/pg-backup full'
    #      - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'

    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: true    # build local repo, and install everything from it:  https://pigsty.io/docs/infra/admin/repo
        # and download all extensions into local repo
        repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # ETCD : https://pigsty.io/docs/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://pigsty.io/docs/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://pigsty.io/docs/docker
    # APP    : https://pigsty.io/docs/app
    #----------------------------------------------#
    # OPTIONAL, launch example pgadmin app with: ./app.yml & ./app.yml -e app=bytebase
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition

          # Admin GUI for PostgreSQL, launch with: ./app.yml
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]   # default user name
              PGADMIN_DEFAULT_PASSWORD: pigsty         # default password

          # Schema Migration GUI for PostgreSQL, launch with: ./app.yml -e app=bytebase
          bytebase:
            conf:
              BB_DOMAIN: http://ddl.pigsty  # replace it with your public domain name and postgres database url
              BB_PGURL: "postgresql://dbuser_bytebase:[email protected]:5432/bytebase?sslmode=prefer"

    #----------------------------------------------#
    # REDIS : https://pigsty.io/docs/redis
    #----------------------------------------------#
    # OPTIONAL, launch redis clusters with: ./redis.yml
    redis-ms:
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }



  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    certbot_sign: false               # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]     # replace with your email address to receive expiration notices
    infra_portal:                     # infra services exposed via portal
      home      : { domain: i.pigsty }     # default domain name
      pgadmin   : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase  : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" }
      minio     : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      #website:   # static local website example stub
      #  domain: repo.pigsty              # external domain name for static site
      #  certbot: repo.pigsty             # use certbot to sign https certificate for this static site
      #  path: /www/pigsty                # path to the static site directory

      #supabase:  # dynamic upstream service example stub
      #  domain: supa.pigsty          # external domain name for upstream service
      #  certbot: supa.pigsty         # use certbot to sign https certificate for this upstream server
      #  endpoint: "10.10.10.10:8000" # endpoint address of the upstream service
      #  websocket: true              # add websocket support

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - '${admin_ip} i.pigsty sss.pigsty'
      - '${admin_ip} adm.pigsty ddl.pigsty repo.pigsty supa.pigsty'
    node_repo_modules: local              # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ]                 # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://pigsty.io/docs/pgsql/backup
    #----------------------------------------------#
    # this template uses minio as the backup repo instead of the 'local' fs repo
    # you can also use external object storage as backup repo (see the `s3` repo below)
    pgbackrest_method: minio          # pgbackrest backup repo method: local, minio, or another repo defined below
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, ignored by minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
      s3:                             # you can use cloud object storage as backup repo
        type: s3                      # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
...

Explanation

The rich template is Pigsty’s full-feature showcase configuration, suitable for users who want to explore all of its capabilities in depth.

Use Cases:

  • Offline environments requiring local software repository
  • Environments needing MinIO as PostgreSQL backup storage
  • Pre-planning multiple business databases and users
  • Running Docker applications (pgAdmin, Bytebase, etc.)
  • Learners who want to understand the full set of configuration parameters

Main Differences from meta:

  • Enables local software repository building (repo_enabled: true)
  • Enables MinIO storage backup (pgbackrest_method: minio)
  • Preinstalls TimescaleDB, pg_wait_sampling and other additional extensions
  • Includes detailed parameter comments for understanding configuration meanings
  • Preconfigures HA cluster stub configuration (pg-test)
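
The commented pg-test stub above selects service members with JMESPath expressions: "[]" matches every cluster member, while the backup selector "[? pg_role == `primary`]" keeps only the primary. A plain-Python sketch of the same filtering logic over a made-up inventory list (illustration only, not Pigsty’s actual implementation):

```python
# Hypothetical inventory entries, mirroring the pg-test stub above
instances = [
    {"ip": "10.10.10.11", "pg_seq": 1, "pg_role": "primary"},
    {"ip": "10.10.10.12", "pg_seq": 2, "pg_role": "replica"},
    {"ip": "10.10.10.13", "pg_seq": 3, "pg_role": "replica"},
]

def select(instances, pg_role=None):
    """Rough analogue of a JMESPath filter such as [? pg_role == `primary`]:
    keep instances whose pg_role matches; no filter ("[]") keeps all members."""
    return [i for i in instances if pg_role is None or i.get("pg_role") == pg_role]

members = select(instances)                     # "[]"  -> all three instances
backups = select(instances, pg_role="primary")  # backup selector -> the primary only
```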

Notes:

  • Some extensions are unavailable on the ARM64 architecture; adjust the package lists as needed
  • Building the local software repository takes longer and consumes more disk space
  • The default passwords are samples and must be changed before production use
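
The default passwords listed above ship with every copy of this template, so a production inventory must override them before the first deployment. A minimal sketch of such an override in the global vars (all values are placeholders to replace with your own secrets):

```yaml
# override the sample passwords in the global vars of your pigsty.yml
# every value below is a placeholder -- substitute your own secrets
grafana_admin_password: <your_grafana_password>
pg_admin_password: <your_pg_dba_password>
pg_monitor_password: <your_pg_monitor_password>
pg_replication_password: <your_pg_replication_password>
patroni_password: <your_patroni_api_password>
haproxy_admin_password: <your_haproxy_password>
minio_secret_key: <your_minio_secret_key>
etcd_root_password: <your_etcd_root_password>
```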

8.4 - slim

Minimal installation template without monitoring infrastructure, installs PostgreSQL directly from internet

The slim configuration template provides a minimal installation: a PostgreSQL high-availability cluster installed directly from the internet, without the Infra monitoring infrastructure.

If you only need a working database instance and can do without the monitoring system, consider the slim installation mode.


Overview

  • Config Name: slim
  • Node Count: Single node
  • Description: Minimal installation template without monitoring infrastructure, installs PostgreSQL directly
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c slim [-i <primary_ip>]
./slim.yml   # Execute slim installation

Content

Source: pigsty/conf/slim.yml

---
#==============================================================#
# File      :   slim.yml
# Desc      :   Pigsty slim installation config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/slim
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for slim / minimal installation
# No monitoring & infra will be installed, just raw postgresql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c slim
#   ./slim.yml

all:
  children:

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        #10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        #10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        #10.10.10.11: { pg_seq: 2, pg_role: replica } # you can add more!
        #10.10.10.12: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ vector ]}
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The slim template is Pigsty’s minimal installation configuration, designed for quick deployment of bare PostgreSQL clusters.

Use Cases:

  • Only need PostgreSQL database, no monitoring system required
  • Resource-limited small servers or edge devices
  • Quick deployment of temporary test databases
  • Already have monitoring system, only need PostgreSQL HA cluster

Key Features:

  • Uses the slim.yml playbook instead of deploy.yml for installation
  • Installs software directly from the internet, with no local software repository
  • Retains core PostgreSQL HA capability (Patroni + etcd + HAProxy)
  • Minimizes package downloads for a faster installation
  • Uses PostgreSQL 18 by default

Differences from meta:

  • slim uses the dedicated slim.yml playbook and skips the Infra module
  • Faster installation and lower resource usage
  • Suitable for “just need a database” scenarios

Notes:

  • After a slim installation, database status cannot be viewed through Grafana
  • If monitoring is needed, use the meta or rich template instead
  • Replicas can be added as needed for high availability
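
The replica lines are already stubbed out in the template above: expanding to a three-node HA cluster is a matter of uncommenting them in both the etcd and pg-meta groups and re-running the playbook against the new hosts. A sketch of the resulting pg-meta section (the extra IPs are the template’s own placeholders):

```yaml
pg-meta:
  hosts:
    10.10.10.10: { pg_seq: 1, pg_role: primary }
    10.10.10.11: { pg_seq: 2, pg_role: replica }
    10.10.10.12: { pg_seq: 3, pg_role: replica, pg_offline_query: true }
  vars:
    pg_cluster: pg-meta
```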

8.5 - fat

Feature-All-Test template, single-node installation of all extensions, builds local repo with PG 13-18 all versions

The fat configuration template is Pigsty’s Feature-All-Test template: it installs every extension on a single node and builds a local software repository containing all extensions for the six supported PostgreSQL major versions (13-18).

This is a full-featured configuration for testing and development, suitable for scenarios that require a complete package cache or need to exercise every extension.


Overview

  • Config Name: fat
  • Node Count: Single node
  • Description: Feature-All-Test template, installs all extensions, builds local repo with PG 13-18 all versions
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, slim

Usage:

./configure -c fat [-i <primary_ip>]

To specify a particular PostgreSQL version:

./configure -c fat -v 17   # Use PostgreSQL 17
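
Assuming configure behaves as in the other templates, pinning the major version rewrites the version-sensitive parameters in the generated pigsty.yml roughly as follows (illustration only):

```yaml
pg_version: 17                        # default postgres major version, pinned by -v 17
# package aliases are expected to follow suit, e.g. pg18-full / pg18-gis
# in repo_packages and pg_extensions would become pg17-full / pg17-gis
```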

Content

Source: pigsty/conf/fat.yml

---
#==============================================================#
# File      :   fat.yml
# Desc      :   Pigsty Feature-All-Test config template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/fat
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the single-node feature-all-test config for pigsty
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c fat [-v 18|17|16|15]
#   ./deploy.yml

all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql
    #----------------------------------------------#
    # this is an example single-node postgres cluster with pgvector installed, with one biz database & two biz users
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary } # <---- primary instance with read-write capability
        #x.xx.xx.xx: { pg_seq: 2, pg_role: replica } # <---- read only replica for read-only online traffic
        #x.xx.xx.xy: { pg_seq: 3, pg_role: offline } # <---- offline instance of ETL & interactive queries
      vars:
        pg_cluster: pg-meta

        # install, load, create pg extensions: https://pigsty.io/docs/pgsql/ext/
        pg_extensions: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'

        # define business users/roles : https://pigsty.io/docs/pgsql/config/user
        pg_users:
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, the password. can be a scram-sha-256 hash string or plain text
            #state: create                   # optional, create|absent, 'create' by default, use 'absent' to drop user
            #login: true                     # optional, can log in, true by default (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create databases? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to the pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin|readonly|readwrite|offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
            # Enhanced roles syntax (PG16+): roles can be string or object with options:
            #   - dbrole_readwrite                       # simple string: GRANT role
            #   - { name: role, admin: true }            # GRANT WITH ADMIN OPTION
            #   - { name: role, set: false }             # PG16: REVOKE SET OPTION
            #   - { name: role, inherit: false }         # PG16: REVOKE INHERIT OPTION
            #   - { name: role, state: absent }          # REVOKE membership
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin] ,comment: admin user for bytebase database   }
          #- {name: dbuser_remove ,state: absent }       # use state: absent to remove a user

        # define business databases : https://pigsty.io/docs/pgsql/config/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among the ansible search path, e.g.: files/)
            schemas: [ pigsty ]             # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - vector                      # install pgvector for vector similarity search
              - postgis                     # install postgis for geospatial type & index
              - timescaledb                 # install timescaledb for time-series data
              - { name: pg_wait_sampling, schema: monitor } # install pg_wait_sampling on monitor schema
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to the pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }

        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # define (OPTIONAL) L2 VIP that bind to primary
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: true # build local repo:  https://pigsty.io/docs/infra/admin/repo
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        repo_packages: [
          node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
          pg18-full,pg18-time,pg18-gis,pg18-rag,pg18-fts,pg18-olap,pg18-feat,pg18-lang,pg18-type,pg18-util,pg18-func,pg18-admin,pg18-stat,pg18-sec,pg18-fdw,pg18-sim,pg18-etl,
          pg17-full,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-util,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
          pg16-full,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-util,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
          pg15-full,pg15-time,pg15-gis,pg15-rag,pg15-fts,pg15-olap,pg15-feat,pg15-lang,pg15-type,pg15-util,pg15-func,pg15-admin,pg15-stat,pg15-sec,pg15-fdw,pg15-sim,pg15-etl,
          pg14-full,pg14-time,pg14-gis,pg14-rag,pg14-fts,pg14-olap,pg14-feat,pg14-lang,pg14-type,pg14-util,pg14-func,pg14-admin,pg14-stat,pg14-sec,pg14-fdw,pg14-sim,pg14-etl,
          pg13-full,pg13-time,pg13-gis,pg13-rag,pg13-fts,pg13-olap,pg13-feat,pg13-lang,pg13-type,pg13-util,pg13-func,pg13-admin,pg13-stat,pg13-sec,pg13-fdw,pg13-sim,pg13-etl,
          infra-extra, kafka, java-runtime, sealos, tigerbeetle, polardb, ivorysql
        ]

    #----------------------------------------------#
    # ETCD : https://pigsty.io/docs/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false             # prevent purging running etcd instance?

    #----------------------------------------------#
    # MINIO : https://pigsty.io/docs/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # DOCKER : https://pigsty.io/docs/docker
    # APP    : https://pigsty.io/docs/app
    #----------------------------------------------#
    # OPTIONAL, launch example pgadmin app with: ./app.yml & ./app.yml -e app=bytebase
    app:
      hosts: { 10.10.10.10: {} }
      vars:
        docker_enabled: true                # enable docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: pgadmin                        # specify the default app name to be installed (in the apps)
        apps:                               # define all applications, appname: definition

          # Admin GUI for PostgreSQL, launch with: ./app.yml
          pgadmin:                          # pgadmin app definition (app/pgadmin -> /opt/pgadmin)
            conf:                           # override /opt/pgadmin/.env
              PGADMIN_DEFAULT_EMAIL: [email protected]   # default user name
              PGADMIN_DEFAULT_PASSWORD: pigsty         # default password

          # Schema Migration GUI for PostgreSQL, launch with: ./app.yml -e app=bytebase
          bytebase:
            conf:
              BB_DOMAIN: http://ddl.pigsty  # replace it with your public domain name and postgres database url
              BB_PGURL: "postgresql://dbuser_bytebase:[email protected]:5432/bytebase?sslmode=prefer"


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    certbot_sign: false               # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]     # replace your email address to receive expiration notice
    infra_portal:                     # domain names and upstream servers
      home         : { domain: i.pigsty }
      pgadmin      : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase     : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" ,websocket: true}
      minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      #website:   # static local website example stub
      #  domain: repo.pigsty              # external domain name for static site
      #  certbot: repo.pigsty             # use certbot to sign https certificate for this static site
      #  path: /www/pigsty                # path to the static site directory

      #supabase:  # dynamic upstream service example stub
      #  domain: supa.pigsty          # external domain name for upstream service
      #  certbot: supa.pigsty         # certbot cert name, apply with `make cert`
      #  endpoint: "10.10.10.10:8000" # upstream endpoint address (ip:port)
      #  websocket: true              # add websocket support

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: true              # overwrite node hostname on multi-node template
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - 10.10.10.10 i.pigsty sss.pigsty
      - 10.10.10.10 adm.pigsty ddl.pigsty repo.pigsty supa.pigsty
    node_repo_modules: local,node,infra,pgsql # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # default postgres version
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                 # prevent purging running postgres instance?
    pg_packages: [ pgsql-main, pgsql-common ] # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # BACKUP : https://pigsty.io/docs/pgsql/backup
    #----------------------------------------------#
    # this template uses the `minio` repo below as backup target; set `local` to fall back to plain filesystem backup
    # you can also use external object storage as the backup repo (see the `s3` stub below)
    pgbackrest_method: minio          # pgbackrest backup repo method: local / minio / s3, see `pgbackrest_repo`
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest [CHANGE ACCORDING to minio_users.pgbackrest]
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days
      s3:                             # you can use cloud object storage as backup repo
        type: s3                      # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for the last 14 days

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The fat template is Pigsty’s full-featured test configuration, designed for completeness testing and offline package building.

Key Features:

  • All Extensions: Installs all categorized extension packages for PostgreSQL 18
  • Multi-version Repository: Local repo contains all six major versions of PostgreSQL 13-18
  • Complete Component Stack: Includes MinIO backup, Docker applications, VIP, etc.
  • Enterprise Components: Includes Kafka, PolarDB, IvorySQL, TigerBeetle, etc.

Repository Contents:

  • PostgreSQL 13-18: six major versions’ kernels and all extensions
  • Extension Categories: time, gis, rag, fts, olap, feat, lang, type, util, func, admin, stat, sec, fdw, sim, etl
  • Enterprise Components: Kafka, Java Runtime, Sealos, TigerBeetle
  • Database Kernels: PolarDB, IvorySQL

Differences from rich:

  • fat contains all six PostgreSQL major versions (13-18), while rich only contains the current default version
  • fat includes additional enterprise components (Kafka, PolarDB, IvorySQL, etc.)
  • fat requires more disk space and a longer build time

Use Cases:

  • Pigsty development testing and feature validation
  • Building complete multi-version offline software packages
  • Testing all extension compatibility scenarios
  • Enterprise environments pre-caching all software packages

Notes:

  • Requires large disk space (100GB+ recommended) for storing all packages
  • Building the local software repository takes a long time
  • Some extensions unavailable on ARM64 architecture
  • Default passwords are sample passwords, must be changed for production
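If you only need one major version (as in the rich template), the multi-version repo_packages list above can be trimmed down; a sketch keeping only the PG 18 package groups defined in this template:

```yaml
repo_packages: [
  node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
  pg18-full, pg18-time, pg18-gis, pg18-rag, pg18-fts, pg18-olap, pg18-feat, pg18-lang, pg18-type,
  pg18-util, pg18-func, pg18-admin, pg18-stat, pg18-sec, pg18-fdw, pg18-sim, pg18-etl
]
```

This shrinks the local repo and shortens build time accordingly.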

8.6 - infra

Only installs observability infrastructure, dedicated template without PostgreSQL and etcd

The infra configuration template only deploys Pigsty’s observability infrastructure components (VictoriaMetrics/Grafana/Loki/Nginx, etc.), without PostgreSQL and etcd.

Suitable for scenarios requiring a standalone monitoring stack, such as monitoring external PostgreSQL/RDS instances or other data sources.


Overview

  • Config Name: infra
  • Node Count: Single or multiple nodes
  • Description: Only installs observability infrastructure, without PostgreSQL and etcd
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c infra [-i <primary_ip>]
./infra.yml    # Only execute infra playbook

Content

Source: pigsty/conf/infra.yml

---
#==============================================================#
# File      :   infra.yml
# Desc      :   Infra Only Config
# Ctime     :   2025-12-16
# Mtime     :   2025-12-30
# Docs      :   https://doc.pgsty.com/infra
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for deploying the Victoria stack alone
# tutorial: https://doc.pgsty.com/infra
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c infra
#   ./infra.yml

all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
        #10.10.10.11: { infra_seq: 2 } # you can add more nodes if you want
        #10.10.10.12: { infra_seq: 3 } # don't forget to assign unique infra_seq for each node
      vars:
        docker_enabled: true            # enable docker with ./docker.yml
        docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        pg_exporters:     # bin/pgmon-add pg-rds
          20001: { pg_cluster: pg-rds ,pg_seq: 1 ,pg_host: 10.10.10.10 ,pg_exporter_url: 'postgres://postgres:[email protected]:5432/postgres' }

  vars:                                 # global variables
    version: v4.0.0                     # pigsty version string
    admin_ip: 10.10.10.10               # admin node ip address
    region: default                     # upstream mirror region: default,china,europe
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    infra_portal:                       # infra services exposed via portal
      home : { domain: i.pigsty }       # default domain name
    repo_enabled: false                 # online installation without repo
    node_repo_modules: node,infra,pgsql # add these repos directly
    #haproxy_enabled: false              # enable haproxy on infra node?
    #vector_enabled: false               # enable vector on infra node?

    # DON'T FORGET TO CHANGE DEFAULT PASSWORDS!
    grafana_admin_password: pigsty
...

Explanation

The infra template is Pigsty’s pure monitoring stack configuration, designed for standalone deployment of observability infrastructure.

Use Cases:

  • Monitoring external PostgreSQL instances (RDS, self-hosted, etc.)
  • Need standalone monitoring/alerting platform
  • Already have PostgreSQL clusters, only need to add monitoring
  • As a central console for multi-cluster monitoring

Included Components:

  • VictoriaMetrics: Time series database for storing metrics
  • VictoriaLogs: Log aggregation system
  • VictoriaTraces: Distributed tracing system
  • Grafana: Visualization dashboards
  • Alertmanager: Alert management
  • Nginx: Reverse proxy and web entry

Not Included:

  • PostgreSQL database cluster
  • etcd distributed coordination service
  • MinIO object storage

Monitoring External Instances: After configuration, add monitoring for external PostgreSQL instances via the pgsql-monitor.yml playbook:

pg_exporters:
  20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.100 }
  20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.101 }
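For external instances that need explicit credentials (e.g. cloud RDS), the full connection string can be supplied via pg_exporter_url, mirroring the pg-rds stub in the template above (host and credentials here are illustrative):

```yaml
pg_exporters:
  20003: { pg_cluster: pg-rds ,pg_seq: 2 ,pg_host: 10.10.10.102 ,pg_exporter_url: 'postgres://dbuser_monitor:[email protected]:5432/postgres' }
```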

Notes:

  • This template will not install any databases
  • For full functionality, use meta or rich template
  • Can add multiple infra nodes for high availability as needed
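As the template comments hint, adding infra nodes for high availability is just a matter of listing more hosts, each with a unique infra_seq:

```yaml
infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
    10.10.10.11: { infra_seq: 2 }
    10.10.10.12: { infra_seq: 3 }
```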

8.7 - Kernel Templates

8.8 - pgsql

Native PostgreSQL kernel, supports deployment of PostgreSQL versions 13 to 18

The pgsql configuration template uses the native PostgreSQL kernel, which is Pigsty’s default database kernel, supporting PostgreSQL versions 13 to 18.


Overview

  • Config Name: pgsql
  • Node Count: Single node
  • Description: Native PostgreSQL kernel configuration template
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c pgsql [-i <primary_ip>]

To specify a particular PostgreSQL version (e.g., 17):

./configure -c pgsql -v 17
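The -v switch simply pins pg_version in the generated pigsty.yml; the equivalent manual edit is a one-liner (the commented extension/repo groups, if used, should track the same major version):

```yaml
pg_version: 17                      # default postgres major version
#pg_extensions: [ pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-olap ]
```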

Content

Source: pigsty/conf/pgsql.yml

---
#==============================================================#
# File      :   pgsql.yml
# Desc      :   1-node PostgreSQL Config template
# Ctime     :   2025-02-23
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/pgsql
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for the basic PostgreSQL kernel.
# Nothing special, just a basic setup with one node.
# tutorial: https://pigsty.io/docs/pgsql/kernel/postgres
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c pgsql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PostgreSQL Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ postgis, timescaledb, vector ]}
        pg_extensions: [ postgis, timescaledb, pgvector, pg_wait_sampling ]
        pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

  vars:
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The pgsql template is Pigsty’s standard kernel configuration, using community-native PostgreSQL.

Version Support:

  • PostgreSQL 18 (default)
  • PostgreSQL 17, 16, 15, 14, 13

Use Cases:

  • Need to use the latest PostgreSQL features
  • Need the widest extension support
  • Standard production environment deployment
  • Same functionality as meta template, explicitly declaring native kernel usage

Differences from meta:

  • The pgsql template explicitly declares use of the native PostgreSQL kernel
  • Suitable for scenarios needing clear distinction between different kernel types

8.9 - code

AI coding sandbox with Code-Server, Jupyter, JuiceFS and PostgreSQL

The code template provides a ready-to-use AI coding sandbox, integrating Code-Server (Web VS Code), Jupyter Lab, JuiceFS distributed filesystem, and a feature-rich PostgreSQL database.


Overview

  • Config Name: code
  • Node Count: Single node
  • Description: AI coding sandbox with Web IDE + Jupyter + JuiceFS + PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c code [-i <primary_ip>]

Content

Source: pigsty/conf/code.yml

---
#==============================================================#
# File      :   code.yml
# Desc      :   Pigsty ai vibe coding sandbox
# Ctime     :   2026-01-19
# Mtime     :   2026-01-22
# Docs      :   https://pigsty.io/docs/conf/vibe
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# VIBE CODING SANDBOX
# PostgreSQL with related extensions
# Coding Agent, Code-Server, Jupyter
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c code
#   ./deploy.yml
#   ./juice.yml     # pgfs: juicefs on pgsql
#   ./code.yml      # code-server

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    pgsql: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } } ,vars: { pg_cluster: pgsql }}

    # optional modules
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}
    #redis-ms:
    #  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
    #  vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

  vars:
    #----------------------------------------------#
    # INFRA: https://pigsty.io/docs/infra
    #----------------------------------------------#
    version: v4.0.0                     # pigsty version string
    admin_ip: 10.10.10.10               # admin node ip address
    region: default                     # upstream mirror region: default,china,europe
    infra_portal:                       # infra services exposed via portal
      home : { domain: i.pigsty }       # default domain name
    dns_enabled: false                # disable dns service
    blackbox_enabled: false             # disable blackbox exporter
    alertmanager_enabled: false         # disable alertmanager
    vtrace_enabled: false               # disable victoriatraces
    infra_extra_services:               # home page navigation entries
      - { name: Code Server  ,url: '/code'    ,desc: 'VS Code Server'   ,icon: 'code'    }

    #----------------------------------------------#
    # NODE: https://pigsty.io/docs/node
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    node_dns_method: none               # do not setup dns
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_packages: [ openssh-server, juicefs, restic, rclone, uv, opencode, claude, code-server, golang, nodejs, asciinema, genai-toolbox, postgrest ]
    docker_enabled: true                # enable docker service
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    #----------------------------------------------#
    # PGSQL: https://pigsty.io/docs/pgsql
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, patroni, pgbackrest, pg_exporter, pgbackrest_exporter ]
    pg_extensions:
      - postgis timescaledb pg_cron pgvector vchord pgvectorscale pg_search pg_textsearch vchord_bm25
      - pg_duckdb pg_mooncake pg_clickhouse pg_parquet pg_tle pljs plprql pg_stat_monitor pg_wait_sampling
      - pg_ddlx pglinter pg_permissions safeupdate pg_dirtyread
      - pg_anon pgsmcrypto credcheck pg_vault pgsodium pg_session_jwt documentdb
      - pgsql-feat pgsql-type pgsql-util pgsql-func pgsql-fdw
    pg_users:
      - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
      - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
    pg_databases:
      - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ postgis, timescaledb, vector ]}
    pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'
    pg_hba_rules:
      - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
      - { user: all ,db: all ,addr: world ,auth: pwd ,title: 'everyone world access with password'    ,order: 900 }
    pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every 1am
    patroni_mode: remove                # remove patroni after deployment
    pgbouncer_enabled: false            # disable pgbouncer pool
    pgbouncer_exporter_enabled: false   # disable pgbouncer_exporter on pgsql hosts?
    pgbackrest_exporter_enabled: false  # disable pgbackrest_exporter
    pg_default_services: []             # do not provision pg services
    #pg_reload: false                   # do not reload patroni/service

    #----------------------------------------------#
    # JUICE : https://pigsty.io/docs/juice
    #----------------------------------------------#
    juice_instances:  # dict of juicefs filesystems to deploy
      jfs:
        path  : /fs
        meta  : postgres://dbuser_meta:[email protected]:5432/meta
        data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
        port  : 9567

    #----------------------------------------------#
    # VIBE : https://pigsty.io/docs/pilot/code
    #----------------------------------------------#
    # CHANGE PASSWORDS!
    code_enabled: true
    code_password: Code.Server
    jupyter_enabled: true
    jupyter_password: Jupyter.Lab
    node_pip_packages: jupyter

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
...

Explanation

The code template is an AI-era web coding sandbox, enabling development, data analysis, and AI application building directly in the browser.

Core Components:

  • Code-Server: web-based VS Code, full-featured editor (http://<ip>/code)
  • Jupyter Lab: interactive data science notebook (http://<ip>:8888)
  • JuiceFS: PostgreSQL-backed distributed filesystem (mounted at /fs)
  • PostgreSQL 18: feature-rich database with vector/timeseries/FTS extensions (port 5432)

Pre-installed Tools:

  • AI Assistants: opencode, claude CLI coding tools
  • Runtimes: golang, nodejs, uv (Python package manager)
  • Data Tools: postgrest (auto REST API), genai-toolbox
  • Utilities: restic, rclone (backup/sync), asciinema (terminal recording)

PostgreSQL Extensions:

Pre-installed extensions covering AI/vector, timeseries, FTS, analytics:

# Vector & AI
pgvector, vchord, pgvectorscale, pg_search, vchord_bm25

# Timeseries & GIS
timescaledb, postgis, pg_cron

# Analytics & Lakehouse
pg_duckdb, pg_mooncake, pg_clickhouse, pg_parquet

# Security & Audit
pg_anon, pgsmcrypto, credcheck, pg_vault, pgsodium

# Development
pg_tle, pljs, plprql, documentdb

JuiceFS Filesystem

This template uses JuiceFS to provide distributed filesystem capability, with a distinctive feature: both metadata and data are stored in PostgreSQL.

Architecture:

  • Metadata Engine: PostgreSQL stores filesystem metadata
  • Data Storage: PostgreSQL Large Objects store file data
  • Mount Point: Default /fs directory
  • Metrics Port: 9567 for Prometheus scraping

Use Cases:

  • Persistent storage for code projects
  • Jupyter Notebook working directory
  • AI model and dataset storage
  • File sharing across instances (when scaling to multiple nodes)

Configuration:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567
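Under the hood, ./juice.yml drives the standard JuiceFS CLI; a rough manual equivalent (a sketch only, assuming the juicefs binary from node_packages, with flags taken from the data line above):

```shell
# format the filesystem: metadata and file data both live in the meta database
juicefs format --storage postgres --bucket 10.10.10.10:5432/meta \
    --access-key dbuser_meta --secret-key DBUser.Meta \
    "postgres://dbuser_meta:[email protected]:5432/meta" jfs

# mount it in the background at /fs
juicefs mount -d "postgres://dbuser_meta:[email protected]:5432/meta" /fs
```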

Deployment

# 1. Download Pigsty
curl https://repo.pigsty.io/get | bash

# 2. Use code template
./configure -c code

# 3. Change passwords (important!)
vi pigsty.yml
# Modify code_password, jupyter_password, etc.

# 4. Deploy infra and PostgreSQL
./deploy.yml

# 5. Deploy JuiceFS filesystem
./juice.yml

# 6. Deploy Code-Server and Jupyter
./code.yml

Access

After deployment, access via browser:

# Code-Server (VS Code Web)
http://<ip>/code
# Password: Code.Server (change it!)

# Jupyter Lab
http://<ip>:8888
# Password: Jupyter.Lab (change it!)

# Grafana Monitoring
http://<ip>:3000
# Username: admin, Password: pigsty

# PostgreSQL
psql postgres://dbuser_meta:DBUser.Meta@<ip>:5432/meta
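Once connected, a quick sanity check shows which extensions are in place (vector comes from pgvector, installed in the meta database by this template):

```sql
-- list installed extensions and their versions
SELECT extname, extversion FROM pg_extension ORDER BY extname;
-- exercise the vector type from pgvector
SELECT '[1,2,3]'::vector;
```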

Use Cases

  • AI App Development: Build RAG, Agent, LLM applications
  • Data Science: Jupyter-based data analysis and visualization
  • Remote Development: Cloud-based Web IDE environment
  • Education: Consistent dev environment for students
  • Rapid Prototyping: Quick idea validation without local setup

Notes

  • Change Passwords: Default code_password and jupyter_password are for testing only
  • Network Security: This template opens world access (addr: world), configure firewall or VPN for production
  • Resources: Recommend 2+ cores, 4GB+ RAM, SSD storage
  • Simplified Architecture: Patroni, PgBouncer disabled for single-node dev environment

8.10 - vibe

VIBE AI coding sandbox config template, integrating Code-Server, JupyterLab, Claude Code, and JuiceFS into a web development environment

The vibe config template provides a ready-to-use AI coding sandbox, integrating Code-Server (Web VS Code), JupyterLab, Claude Code CLI, JuiceFS distributed filesystem, and feature-rich PostgreSQL database.


Overview

  • Config Name: vibe
  • Node Count: Single node
  • Description: VIBE AI coding sandbox with Code-Server + JupyterLab + Claude Code + JuiceFS + PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c vibe [-i <primary_ip>]

Content

Source: pigsty/conf/vibe.yml

---
#==============================================================#
# File      :   vibe.yml
# Desc      :   Pigsty ai vibe coding sandbox
# Ctime     :   2026-01-19
# Mtime     :   2026-01-24
# Docs      :   https://pigsty.io/docs/conf/vibe
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# VIBE CODING SANDBOX
# PostgreSQL with related extensions
# Code-Server, Jupyter, Claude Code
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c vibe
#   ./deploy.yml
#   ./juice.yml     # pgfs: juicefs on pgsql, mount on /fs
#   ./vibe.yml      # code-server, jupyter, and claude-code

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    pgsql: { hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } } ,vars: { pg_cluster: pgsql }}

    # optional modules
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}
    #redis-ms:
    #  hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
    #  vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

  vars:
    #----------------------------------------------#
    # INFRA: https://pigsty.io/docs/infra
    #----------------------------------------------#
    version: v4.0.0                     # pigsty version string
    admin_ip: 10.10.10.10               # admin node ip address
    region: default                     # upstream mirror region: default,china,europe
    infra_portal:                       # infra services exposed via portal
      home : { domain: i.pigsty }       # default domain name
    dns_enabled: false                # disable dns service
    vtrace_enabled: false               # enable vtrace extension
    #blackbox_enabled: false             # disable blackbox exporter
    #alertmanager_enabled: false         # disable alertmanager
    infra_extra_services:               # home page navigation entries
      - { name: Code Server  ,url: '/code'             ,desc: 'VS Code Server'       ,icon: 'code'     }
      - { name: Jupyter      ,url: '/jupyter'          ,desc: 'Jupyter Notebook'     ,icon: 'jupyter'  }
      - { name: Claude Code  ,url: '/ui/d/claude-code' ,desc: 'Claude Observability' ,icon: 'claude'   }

    #----------------------------------------------#
    # NODE: https://pigsty.io/docs/node
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit
    node_dns_method: none               # do not setup dns
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_packages: [ openssh-server, juicefs, restic, rclone, uv, opencode, claude, code-server, golang, nodejs, asciinema, genai-toolbox, postgrest ]
    docker_enabled: true                # enable docker service
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    #----------------------------------------------#
    # PGSQL: https://pigsty.io/docs/pgsql
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, patroni, pgbackrest, pg_exporter, pgbackrest_exporter ]
    pg_extensions:
      - postgis timescaledb pg_cron pgvector vchord pgvectorscale pg_search pg_textsearch vchord_bm25
      - pg_duckdb pg_mooncake pg_clickhouse pg_parquet pg_tle pljs plprql pg_stat_monitor pg_wait_sampling
      - pg_ddlx pglinter pg_permissions safeupdate pg_dirtyread
      - pg_anon pgsmcrypto credcheck pg_vault pgsodium pg_session_jwt documentdb
      - pgsql-feat pgsql-type pgsql-util pgsql-func pgsql-fdw
    pg_users:
      - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
      - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
    pg_databases:
      - { name: meta, baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [ postgis, timescaledb, vector ]}
    pg_libs: 'timescaledb, pg_stat_statements, auto_explain, pg_wait_sampling'
    pg_hba_rules:
      - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
      - { user: all ,db: all ,addr: world ,auth: pwd ,title: 'everyone world access with password'    ,order: 900 }
    pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every 1am
    patroni_mode: remove                # remove patroni after deployment
    pgbouncer_enabled: false            # disable pgbouncer pool
    pgbouncer_exporter_enabled: false   # disable pgbouncer_exporter on pgsql hosts?
    pgbackrest_exporter_enabled: false  # disable pgbackrest_exporter
    pg_default_services: []             # do not provision pg services
    #pg_reload: false                   # do not reload patroni/service

    #----------------------------------------------#
    # JUICE : https://pigsty.io/docs/juice
    #----------------------------------------------#
    juice_instances:  # dict of juicefs filesystems to deploy
      jfs:
        path  : /fs
        meta  : postgres://dbuser_meta:[email protected]:5432/meta
        data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
        port  : 9567

    #----------------------------------------------#
    # VIBE : https://pigsty.io/docs/vibe
    #----------------------------------------------#
    # CHANGE PASSWORDS!
    code_password: Code.Server
    jupyter_password: Jupyter.Lab
    #claude_env:   # you can use other models here!
    #  ANTHROPIC_BASE_URL: https://open.bigmodel.cn/api/anthropic
    #  ANTHROPIC_API_URL: https://open.bigmodel.cn/api/anthropic
    #  ANTHROPIC_AUTH_TOKEN: your_api_service_token
    #  ANTHROPIC_MODEL: glm-4.7
    #  ANTHROPIC_SMALL_FAST_MODEL: glm-4.5-air

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
...

Explanation

The vibe template is an AI-era web coding sandbox that enables development, data analysis, and AI application building entirely in the browser.

Core Components:

| Component | Description | Access Method |
|-----------|-------------|---------------|
| Code-Server | Web version of VS Code, full-featured code editor | `http://<ip>/code` |
| JupyterLab | Interactive data science notebook, Python/SQL | `http://<ip>/jupyter` |
| Claude Code | AI coding assistant CLI with OpenTelemetry observability | Terminal `claude` command |
| JuiceFS | PostgreSQL-based distributed filesystem | Mount point `/fs` |
| PostgreSQL 18 | Feature-rich database with vector/timeseries/fulltext extensions | Port 5432 |

Pre-installed Dev Tools:

  • AI Assistants: claude (Claude Code CLI), opencode (CLI AI coding tool)
  • Language Runtimes: golang, nodejs, uv (Python package manager)
  • Data Tools: postgrest (auto REST API), genai-toolbox
  • Utilities: restic, rclone (backup sync), asciinema (terminal recording)

PostgreSQL Extensions:

This template pre-installs rich PostgreSQL extensions covering AI/vector, timeseries, fulltext search, analytics:

# Vector & AI
pgvector, vchord, pgvectorscale, pg_search, pg_textsearch, vchord_bm25

# Timeseries & Geo
timescaledb, postgis, pg_cron

# Analytics & Lakehouse
pg_duckdb, pg_mooncake, pg_clickhouse, pg_parquet

# Security & Audit
pg_anon, pgsmcrypto, credcheck, pg_vault, pgsodium, pg_session_jwt

# Development
pg_tle, pljs, plprql, documentdb, pglinter

VIBE Module Components

The VIBE module, new in v4.0.0, is an AI coding sandbox module with three core components:

Code-Server: VS Code in browser

  • Full VS Code functionality, extension support
  • HTTPS access via Nginx reverse proxy
  • Supports Open VSX and Microsoft extension marketplaces
  • Related params: code_enabled, code_port, code_data, code_password, code_gallery

JupyterLab: Interactive computing environment

  • Python/SQL/Markdown notebook support
  • Pre-configured Python venv with data science libraries
  • HTTPS access via Nginx reverse proxy
  • Related params: jupyter_enabled, jupyter_port, jupyter_data, jupyter_password, jupyter_venv

Claude Code: AI coding assistant

  • Configure Claude Code CLI, skip initial onboarding
  • Built-in OpenTelemetry config, sends metrics/logs to Victoria stack
  • Provides claude-code dashboard for usage monitoring
  • Related params: claude_enabled, claude_env

JuiceFS Filesystem

This template uses JuiceFS for distributed filesystem capability, with a special feature: both metadata and data stored in PostgreSQL.

Architecture Features:

  • Metadata Engine: Uses PostgreSQL for filesystem metadata storage
  • Data Storage: Uses PostgreSQL Large Object for file data storage
  • Mount Point: Default mount at /fs (controlled by vibe_data param)
  • Monitoring Port: 9567 provides Prometheus metrics

Use Cases:

  • Persistent storage for code projects
  • Working directory for Jupyter Notebooks
  • Storage for AI models and datasets
  • File sharing across instances (when scaled to multiple nodes)

Config Example:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567
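Once mounted, the filesystem can be exercised and its Prometheus endpoint checked directly on the node; a sketch assuming the defaults above and a deployed cluster:

```shell
# Write through the mount; the data lands in PostgreSQL large objects
df -h /fs && echo hello > /fs/hello.txt && cat /fs/hello.txt

# JuiceFS client metrics on the configured port (metric names start with juicefs_)
curl -s http://127.0.0.1:9567/metrics | grep -m 5 '^juicefs_'
```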

Deployment Steps

# 1. Download Pigsty
curl https://repo.pigsty.io/get | bash

# 2. Use vibe config template
./configure -c vibe

# 3. Modify passwords (important!)
vi pigsty.yml
# Change code_password, jupyter_password, etc.

# 4. Deploy infrastructure and PostgreSQL
./deploy.yml

# 5. Deploy JuiceFS filesystem
./juice.yml

# 6. Deploy VIBE module (Code-Server, JupyterLab, Claude Code)
./vibe.yml

Access Methods

After deployment, access via browser:

# Code-Server (VS Code Web)
http://<ip>/code
# Password: Code.Server (please change)

# JupyterLab
http://<ip>/jupyter
# Password: Jupyter.Lab (please change)

# Claude Code Dashboard
http://<ip>:3000/d/claude-code
# Grafana default: admin / pigsty

# PostgreSQL
psql postgres://dbuser_meta:DBUser.Meta@<ip>:5432/meta

Use Cases

  • AI App Development: Build RAG, Agent, LLM applications
  • Data Science: Use JupyterLab for data analysis and visualization
  • Remote Development: Setup Web IDE environment on cloud servers
  • Teaching Demos: Provide consistent dev environment for students
  • Rapid Prototyping: Quickly validate ideas without local env setup
  • Claude Code Observability: Monitor AI coding assistant usage

Notes

  • Must change passwords: code_password and jupyter_password defaults are for testing only
  • Network security: This template opens world access (addr: world), production should configure firewall or VPN
  • Resource requirements: Recommend at least 2 cores 4GB memory, SSD disk
  • Simplified architecture: This template disables HA components such as Patroni and PgBouncer, making it suitable for a single-node dev environment
  • Claude API: Using Claude Code requires configuring API key in claude_env

8.11 - mssql

WiltonDB / Babelfish kernel, providing Microsoft SQL Server protocol and syntax compatibility

The mssql configuration template uses WiltonDB / Babelfish database kernel instead of native PostgreSQL, providing Microsoft SQL Server wire protocol (TDS) and T-SQL syntax compatibility.

For the complete tutorial, see: Babelfish (MSSQL) Kernel Guide


Overview

  • Config Name: mssql
  • Node Count: Single node
  • Description: WiltonDB / Babelfish configuration template, provides SQL Server protocol compatibility
  • OS Distro: el8, el9, el10, u22, u24 (Debian not available)
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c mssql [-i <primary_ip>]

Content

Source: pigsty/conf/mssql.yml

---
#==============================================================#
# File      :   mssql.yml
# Desc      :   Babelfish: WiltonDB (MSSQL Compatible) template
# Ctime     :   2020-08-01
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/mssql
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Babelfish Kernel (WiltonDB),
# Which is a PostgreSQL 15 fork with SQL Server Compatibility
# tutorial: https://pigsty.io/docs/pgsql/kernel/babelfish
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c mssql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # Babelfish Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_mssql ,password: DBUser.MSSQL ,superuser: true, pgbouncer: true ,roles: [dbrole_admin], comment: superuser & owner for babelfish  }
        pg_databases:
          - name: mssql
            baseline: mssql.sql
            extensions: [uuid-ossp, babelfishpg_common, babelfishpg_tsql, babelfishpg_tds, babelfishpg_money, pg_hint_plan, system_stats, tds_fdw]
            owner: dbuser_mssql
            parameters: { 'babelfishpg_tsql.migration_mode' : 'multi-db' }
            comment: babelfish cluster, a MSSQL compatible pg cluster
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # Babelfish / WiltonDB Ad Hoc Settings
        pg_mode: mssql                     # Microsoft SQL Server Compatible Mode
        pg_version: 15
        pg_packages: [ wiltondb, pgsql-common, sqlcmd ]
        pg_libs: 'babelfishpg_tds, pg_stat_statements, auto_explain' # preload babelfish TDS library
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: dbuser_mssql ,db: mssql ,addr: intra ,auth: md5 ,title: 'allow mssql dbsu intranet access'      ,order: 525 } # <--- use md5 auth method for mssql user
          - { user: all          ,db: all   ,addr: intra ,auth: md5 ,title: 'everyone intranet access with md5 pwd' ,order: 800 }
        pg_default_services: # route primary & replica service to mssql port 1433
          - { name: primary ,port: 5433 ,dest: 1433  ,check: /primary   ,selector: "[]" }
          - { name: replica ,port: 5434 ,dest: 1433  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
          - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
          - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]" }

  vars:
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false                 # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql,mssql # extra mssql repo is required
    node_tune: oltp                           # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 15                            # Babelfish kernel is compatible with postgres 15
    pg_conf: oltp.yml                         # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The mssql template allows you to use SQL Server Management Studio (SSMS) or other SQL Server client tools to connect to PostgreSQL.

Key Features:

  • Uses TDS protocol (port 1433), compatible with SQL Server clients
  • Supports T-SQL syntax, low migration cost
  • Retains PostgreSQL’s ACID properties and extension ecosystem
  • Supports multi-db and single-db migration modes

Connection Methods:

# Using sqlcmd command line tool
sqlcmd -S 10.10.10.10,1433 -U dbuser_mssql -P DBUser.MSSQL -d mssql

# Using SSMS or Azure Data Studio
# Server: 10.10.10.10,1433
# Authentication: SQL Server Authentication
# Login: dbuser_mssql
# Password: DBUser.MSSQL
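Beyond connecting, you can confirm T-SQL compatibility with a few statements over the TDS port; an illustrative session assuming the template's default user and database on a deployed cluster:

```shell
sqlcmd -S 10.10.10.10,1433 -U dbuser_mssql -P DBUser.MSSQL -d mssql -Q "SELECT @@VERSION"

sqlcmd -S 10.10.10.10,1433 -U dbuser_mssql -P DBUser.MSSQL -d mssql -Q "
CREATE TABLE dbo.demo (id INT IDENTITY PRIMARY KEY, name NVARCHAR(50));
INSERT INTO dbo.demo (name) VALUES (N'pigsty');
SELECT TOP 1 id, name FROM dbo.demo;
"
```

IDENTITY columns, NVARCHAR, and TOP are T-SQL constructs that Babelfish translates onto PostgreSQL equivalents.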

Use Cases:

  • Migrating from SQL Server to PostgreSQL
  • Applications needing to support both SQL Server and PostgreSQL clients
  • Leveraging PostgreSQL ecosystem while maintaining T-SQL compatibility

Notes:

  • WiltonDB is based on PostgreSQL 15 and does not include features from later major versions
  • Some T-SQL syntax may behave differently; refer to the Babelfish compatibility documentation
  • The mssql user must use the md5 authentication method (not scram-sha-256)

8.12 - polar

PolarDB for PostgreSQL kernel, providing Aurora-style storage-compute separation capability

The polar configuration template uses Alibaba Cloud’s PolarDB for PostgreSQL database kernel instead of native PostgreSQL, providing “cloud-native” Aurora-style storage-compute separation capability.

For the complete tutorial, see: PolarDB for PostgreSQL (POLAR) Kernel Guide


Overview

  • Config Name: polar
  • Node Count: Single node
  • Description: Uses PolarDB for PostgreSQL kernel
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c polar [-i <primary_ip>]

Content

Source: pigsty/conf/polar.yml

---
#==============================================================#
# File      :   polar.yml
# Desc      :   Pigsty 1-node PolarDB Kernel Config Template
# Ctime     :   2020-08-05
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/polar
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for PolarDB PG Kernel,
# Which is a PostgreSQL 15 fork with RAC flavor features
# tutorial: https://pigsty.io/docs/pgsql/kernel/polardb
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c polar
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # PolarDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # PolarDB Ad Hoc Settings
        pg_version: 15                            # PolarDB PG is based on PG 15
        pg_mode: polar                            # PolarDB PG Compatible mode
        pg_packages: [ polardb, pgsql-common ]    # Replace PG kernel with PolarDB kernel
        pg_exporter_exclude_database: 'template0,template1,postgres,polardb_admin'
        pg_default_roles:                         # PolarDB require replicator as superuser
          - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
          - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
          - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
          - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
          - { name: postgres     ,superuser: true  ,comment: system superuser }
          - { name: replicator   ,superuser: true  ,replication: true ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator } # <- superuser is required for replication
          - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
          - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 15                      # PolarDB is compatible with PG 15
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Explanation

The polar template uses Alibaba Cloud’s open-source PolarDB for PostgreSQL kernel, providing cloud-native database capabilities.

Key Features:

  • Storage-compute separation architecture: compute and storage can scale independently
  • Supports a one-writer, multiple-reader topology; read replicas can be added in seconds
  • Compatible with the PostgreSQL ecosystem, maintaining SQL compatibility
  • Supports shared-storage deployments, suitable for cloud environments

Use Cases:

  • Cloud-native scenarios requiring storage-compute separation architecture
  • Read-heavy write-light workloads
  • Scenarios requiring quick scaling of read replicas
  • Test environments for evaluating PolarDB features

Notes:

  • PolarDB is based on PostgreSQL 15 and does not include features from later major versions
  • Replication user requires superuser privileges (different from native PostgreSQL)
  • Some PostgreSQL extensions may have compatibility issues
  • ARM64 architecture not supported

8.13 - ivory

IvorySQL kernel, providing Oracle syntax and PL/SQL compatibility

The ivory configuration template uses Highgo’s IvorySQL database kernel instead of native PostgreSQL, providing Oracle syntax and PL/SQL compatibility.

For the complete tutorial, see: IvorySQL (Oracle Compatible) Kernel Guide


Overview

  • Config Name: ivory
  • Node Count: Single node
  • Description: Uses IvorySQL Oracle-compatible kernel
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c ivory [-i <primary_ip>]

Content

Source: pigsty/conf/ivory.yml

---
#==============================================================#
# File      :   ivory.yml
# Desc      :   IvorySQL 5 (Oracle Compatible) template
# Ctime     :   2024-08-05
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/ivory
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for IvorySQL 5 Kernel,
# Which is a PostgreSQL 18 fork with Oracle Compatibility
# tutorial: https://pigsty.io/docs/pgsql/kernel/ivorysql
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c ivory
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # IvorySQL Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # IvorySQL Ad Hoc Settings
        pg_mode: ivory                                                 # Use IvorySQL Oracle Compatible Mode
        pg_packages: [ ivorysql, pgsql-common ]                        # install IvorySQL instead of postgresql kernel
        pg_libs: 'liboracle_parser, pg_stat_statements, auto_explain'  # pre-load oracle parser

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # IvorySQL kernel is compatible with postgres 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ivory template uses Highgo’s open-source IvorySQL kernel, providing Oracle database compatibility.

Key Features:

  • Supports Oracle PL/SQL syntax
  • Compatible with Oracle data types (NUMBER, VARCHAR2, etc.)
  • Supports Oracle-style packages
  • Retains all standard PostgreSQL functionality
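These features can be exercised with Oracle-flavored DDL over the regular PostgreSQL port. A sketch only: whether Oracle types and functions resolve depends on the database's compatibility mode, so treat this as illustrative rather than guaranteed:

```shell
psql postgres://dbuser_meta:DBUser.Meta@<ip>:5432/meta <<'SQL'
CREATE TABLE emp (
    empno    NUMBER(4),       -- Oracle NUMBER type
    ename    VARCHAR2(10),    -- Oracle VARCHAR2 type
    hiredate DATE
);
INSERT INTO emp VALUES (7369, 'SMITH', SYSDATE);  -- Oracle SYSDATE function
SELECT empno, ename FROM emp;
SQL
```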

Use Cases:

  • Migrating from Oracle to PostgreSQL
  • Applications needing both Oracle and PostgreSQL syntax support
  • Leveraging PostgreSQL ecosystem while maintaining PL/SQL compatibility
  • Test environments for evaluating IvorySQL features

Notes:

  • IvorySQL 5 is based on PostgreSQL 18
  • liboracle_parser must be loaded via shared_preload_libraries (as configured in this template)
  • pgbackrest may have checksum issues in Oracle-compatible mode, PITR capability is limited
  • Primarily supports EL8/EL9 systems, refer to official docs for other OS support
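
As an illustration, Oracle-style data types can be used directly once the cluster is up. This is a minimal sketch (the table and column names are made up for the example; consult the IvorySQL docs for the full compatibility matrix):

```sql
-- Oracle-style types accepted by IvorySQL (hypothetical table)
CREATE TABLE emp (
    id   NUMBER(10) PRIMARY KEY,   -- Oracle NUMBER instead of NUMERIC
    name VARCHAR2(64)              -- Oracle VARCHAR2 instead of VARCHAR
);
```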

8.14 - mysql

OpenHalo kernel, provides MySQL protocol and syntax compatibility

The mysql configuration template uses the OpenHalo database kernel in place of native PostgreSQL, providing MySQL wire protocol and SQL syntax compatibility.


Overview

  • Config Name: mysql
  • Node Count: Single node
  • Description: OpenHalo MySQL-compatible kernel configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c mysql [-i <primary_ip>]

Content

Source: pigsty/conf/mysql.yml

---
#==============================================================#
# File      :   mysql.yml
# Desc      :   1-node OpenHaloDB (MySQL Compatible) template
# Ctime     :   2025-04-03
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/mysql
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for OpenHalo PG Kernel,
# Which is a PostgreSQL 14 fork with MySQL Wire Compatibility
# tutorial: https://pigsty.io/docs/pgsql/kernel/openhalo
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c mysql
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # OpenHalo Database Cluster
    #----------------------------------------------#
    # connect with mysql client: mysql -h 10.10.10.10 -u dbuser_meta -D mysql (the actual database is 'postgres', and 'mysql' is a schema)
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: postgres, extensions: [aux_mysql]} # the mysql compatible database
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty]}
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # OpenHalo Ad Hoc Setting
        pg_mode: mysql                    # MySQL Compatible Mode by HaloDB
        pg_version: 14                    # The current HaloDB is compatible with PG Major Version 14
        pg_packages: [ openhalodb, pgsql-common ]  # install openhalodb instead of postgresql kernel

  vars:
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 14                      # OpenHalo is compatible with PG 14
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The mysql template uses the OpenHalo kernel, allowing you to connect to PostgreSQL using MySQL client tools.

Key Features:

  • Uses MySQL protocol (port 3306), compatible with MySQL clients
  • Supports a subset of MySQL SQL syntax
  • Retains PostgreSQL’s ACID properties and storage engine
  • Supports both PostgreSQL and MySQL protocol connections simultaneously

Connection Methods:

# Using the MySQL client (the 'mysql' database is actually a schema in 'postgres')
mysql -h 10.10.10.10 -P 3306 -u dbuser_meta -pDBUser.Meta -D mysql

# PostgreSQL protocol access is retained as well
psql postgres://dbuser_meta:[email protected]:5432/meta

Use Cases:

  • Migrating from MySQL to PostgreSQL
  • Applications needing to support both MySQL and PostgreSQL clients
  • Leveraging PostgreSQL ecosystem while maintaining MySQL compatibility
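
Once connected via the MySQL protocol, ordinary MySQL-flavored DML is expected to work. A minimal sketch (the table name is made up, and exact syntax coverage should be verified against the OpenHalo documentation):

```sql
-- Hypothetical table; OpenHalo accepts a subset of MySQL syntax
CREATE TABLE greetings (id INT PRIMARY KEY, msg VARCHAR(32));
INSERT INTO greetings VALUES (1, 'hello');
SELECT msg FROM greetings LIMIT 1;
```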

Notes:

  • OpenHalo is based on PostgreSQL 14 and does not support features from later major versions
  • Some MySQL syntax may have compatibility differences
  • See the OS Distro list in the Overview above for supported systems
  • ARM64 architecture not supported

8.15 - pgtde

Percona PostgreSQL kernel, provides Transparent Data Encryption (pg_tde) capability

The pgtde configuration template uses the Percona PostgreSQL database kernel, providing Transparent Data Encryption (TDE) capability.


Overview

  • Config Name: pgtde
  • Node Count: Single node
  • Description: Percona PostgreSQL transparent data encryption configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c pgtde [-i <primary_ip>]

Content

Source: pigsty/conf/pgtde.yml

---
#==============================================================#
# File      :   pgtde.yml
# Desc      :   PG TDE with Percona PostgreSQL 1-node template
# Ctime     :   2025-07-04
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/pgtde
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for Percona PostgreSQL Distribution
# With pg_tde extension, which is compatible with PostgreSQL 18.1
# tutorial: https://pigsty.io/docs/pgsql/kernel/percona
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c pgtde
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # Percona Postgres Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer  }
        pg_databases:
          - name: meta
            baseline: cmdb.sql
            comment: pigsty tde database
            schemas: [pigsty]
            extensions: [ vector, postgis, pg_tde ,pgaudit, { name: pg_stat_monitor, schema: monitor } ]
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # Percona PostgreSQL TDE Ad Hoc Settings
        pg_packages: [ percona-main, pgsql-common ]  # install percona postgres packages
        pg_libs: 'pg_tde, pgaudit, pg_stat_statements, pg_stat_monitor, auto_explain'

  vars:
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql,percona
    node_tune: oltp

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default Percona TDE PG Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The pgtde template uses the Percona PostgreSQL kernel, providing enterprise-grade transparent data encryption capability.

Key Features:

  • Transparent Data Encryption: Data automatically encrypted on disk, transparent to applications
  • Key Management: Supports local keys and external Key Management Systems (KMS)
  • Table-level Encryption: Selectively encrypt sensitive tables
  • Full Compatibility: Fully compatible with native PostgreSQL

Use Cases:

  • Meeting data security compliance requirements (e.g., PCI-DSS, HIPAA)
  • Storing sensitive data (e.g., personal information, financial data)
  • Scenarios requiring data-at-rest encryption
  • Enterprise environments with strict data security requirements

Usage:

-- Create an encrypted table (pg_tde's access method is tde_heap)
CREATE TABLE sensitive_data (
    id SERIAL PRIMARY KEY,
    ssn VARCHAR(11)
) USING tde_heap;

-- Or enable encryption on an existing table (rewrites the table)
ALTER TABLE existing_table SET ACCESS METHOD tde_heap;
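
Note that encryption requires a key provider and a principal key to be configured per database first. A hedged sketch follows: the function names reflect recent pg_tde releases and the keyring path is a made-up example, so verify both against Percona's pg_tde documentation:

```sql
-- Hypothetical keyring path; configure before creating tde_heap tables
SELECT pg_tde_add_database_key_provider_file('local-keyring', '/pg/tde/keyring.file');
SELECT pg_tde_set_key_using_database_key_provider('principal-key', 'local-keyring');
```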

Notes:

  • Percona PostgreSQL is based on PostgreSQL 18
  • Encryption brings some performance overhead (typically 5-15%)
  • Encryption keys must be properly managed
  • ARM64 architecture not supported

8.16 - oriole

OrioleDB kernel, provides bloat-free OLTP enhanced storage engine

The oriole configuration template uses the OrioleDB storage engine in place of PostgreSQL’s default heap storage, providing bloat-free, high-performance OLTP capability.


Overview

  • Config Name: oriole
  • Node Count: Single node
  • Description: OrioleDB bloat-free storage engine configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta

Usage:

./configure -c oriole [-i <primary_ip>]

Content

Source: pigsty/conf/oriole.yml

---
#==============================================================#
# File      :   oriole.yml
# Desc      :   1-node OrioleDB (OLTP Enhancement) template
# Ctime     :   2025-04-05
# Mtime     :   2025-12-28
# Docs      :   https://pigsty.io/docs/conf/oriole
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# This is the config template for OrioleDB Kernel,
# Which is a Patched PostgreSQL 17 fork without bloat
# tutorial: https://pigsty.io/docs/pgsql/kernel/orioledb
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c oriole
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }} ,vars: { repo_enabled: false }}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }} ,vars: { etcd_cluster: etcd  }}
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}

    #----------------------------------------------#
    # OrioleDB Database Cluster
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty], extensions: [orioledb]}
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'

        # OrioleDB Ad Hoc Settings
        pg_mode: oriole                                         # oriole compatible mode
        pg_packages: [ oriole, pgsql-common ]                   # install OrioleDB kernel
        pg_libs: 'orioledb, pg_stat_statements, auto_explain'   # Load OrioleDB Extension

  vars:                               # global variables
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false           # do not overwrite node hostname on single node mode
    node_repo_modules: node,infra,pgsql # add these repos directly to the singleton node
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 17                      # OrioleDB Kernel is based on PG 17
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The oriole template uses the OrioleDB storage engine, which fundamentally solves PostgreSQL’s table bloat problem.

Key Features:

  • Bloat-free Design: implements MVCC with an UNDO log instead of keeping dead tuple versions in the heap
  • No VACUUM Required: Eliminates performance jitter from autovacuum
  • Row-level WAL: More efficient logging and replication
  • Compressed Storage: Built-in data compression, reduces storage space

Use Cases:

  • High-frequency update OLTP workloads
  • Applications sensitive to write latency
  • Need for stable response times (eliminates VACUUM impact)
  • Large tables with frequent updates causing bloat

Usage:

-- Create table using OrioleDB storage
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INT,
    amount DECIMAL(10,2)
) USING orioledb;

-- Existing heap tables cannot be converted in place; they must be rebuilt

Notes:

  • OrioleDB is based on PostgreSQL 17
  • orioledb must be added to shared_preload_libraries
  • Some PostgreSQL features may not be fully supported
  • ARM64 architecture not supported
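
After deployment, it is worth confirming the engine is actually active before creating tables. A minimal check, run in psql:

```sql
-- 'orioledb' should appear in the preload list configured by pg_libs
SHOW shared_preload_libraries;

-- the extension must exist in the target database
CREATE EXTENSION IF NOT EXISTS orioledb;
```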

8.17 - supabase

Self-host Supabase using Pigsty-managed PostgreSQL, an open-source Firebase alternative

The supabase configuration template provides a reference configuration for self-hosting Supabase, using Pigsty-managed PostgreSQL as the underlying storage.

For more details, see Supabase Self-Hosting Tutorial


Overview

  • Config Name: supabase
  • Node Count: Single node
  • Description: Self-host Supabase using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, rich

Usage:

./configure -c supabase [-i <primary_ip>]

Content

Source: pigsty/conf/supabase.yml

---
#==============================================================#
# File      :   supabase.yml
# Desc      :   Pigsty configuration for self-hosting supabase
# Ctime     :   2023-09-19
# Mtime     :   2026-01-20
# Docs      :   https://pigsty.io/docs/conf/supabase
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# supabase is available on el8/el9/u22/u24/d12 with pg15,16,17,18
# tutorial: https://pigsty.io/docs/app/supabase
# Usage:
#   curl https://repo.pigsty.io/get | bash    # install pigsty
#   ./configure -c supabase   # use this supabase conf template
#   ./deploy.yml              # install pigsty & pgsql & minio
#   ./docker.yml              # install docker & docker compose
#   ./app.yml                 # launch supabase with docker compose

all:
  children:


    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        repo_enabled: false    # disable local repo

    #----------------------------------------------#
    # ETCD : https://pigsty.io/docs/etcd
    #----------------------------------------------#
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd
        etcd_safeguard: false  # enable to prevent purging running etcd instance

    #----------------------------------------------#
    # MINIO : https://pigsty.io/docs/minio
    #----------------------------------------------#
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    #----------------------------------------------#
    # PostgreSQL cluster for Supabase self-hosting
    #----------------------------------------------#
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          # supabase roles: anon, authenticated, dashboard_user
          - { name: anon           ,login: false }
          - { name: authenticated  ,login: false }
          - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
          - { name: service_role   ,login: false ,bypassrls: true }
          # supabase users: please use the same password
          - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true   ,roles: [ dbrole_admin ] ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
          - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] }
          - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin, authenticated ,anon ,service_role ] ,createrole: true }
          - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false  ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true ,roles: [ dbrole_admin ]}
          - { name: supabase_etl_admin         ,password: 'DBUser.Supa' ,replication: true ,roles: [ pg_read_all_data, dbrole_readonly ]}
          - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles:   [ pg_read_all_data, dbrole_readonly ]}
        pg_databases:
          - name: postgres
            baseline: supabase.sql
            owner: supabase_admin
            comment: supabase postgres database
            schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
            extensions:
              - { name: pgcrypto         ,schema: extensions } # cryptographic functions
              - { name: pg_net           ,schema: extensions } # async HTTP
              - { name: pgjwt            ,schema: extensions } # json web token API for postgres
              - { name: uuid-ossp        ,schema: extensions } # generate universally unique identifiers (UUIDs)
              - { name: pgsodium         ,schema: extensions } # pgsodium is a modern cryptography library for Postgres.
              - { name: supabase_vault   ,schema: extensions } # Supabase Vault Extension
              - { name: pg_graphql       ,schema: extensions } # pg_graphql: GraphQL support
              - { name: pg_jsonschema    ,schema: extensions } # pg_jsonschema: Validate json schema
              - { name: wrappers         ,schema: extensions } # wrappers: FDW collections
              - { name: http             ,schema: extensions } # http: allows web page retrieval inside the database.
              - { name: pg_cron          ,schema: extensions } # pg_cron: Job scheduler for PostgreSQL
              - { name: timescaledb      ,schema: extensions } # timescaledb: Enables scalable inserts and complex queries for time-series data
              - { name: pg_tle           ,schema: extensions } # pg_tle: Trusted Language Extensions for PostgreSQL
              - { name: vector           ,schema: extensions } # pgvector: the vector similarity search
              - { name: pgmq             ,schema: extensions } # pgmq: A lightweight message queue like AWS SQS and RSMQ
          - { name: supabase ,owner: supabase_admin ,comment: supabase analytics database ,schemas: [ extensions, _analytics ] }

        # supabase required extensions
        pg_libs: 'timescaledb, pgsodium, plpgsql, plpgsql_check, pg_cron, pg_net, pg_stat_statements, auto_explain, pg_wait_sampling, pg_tle, plan_filter'
        pg_extensions: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
        pg_parameters: { cron.database_name: postgres }
        pg_hba_rules: # supabase hba rules, require access from docker network
          - { user: all ,db: postgres  ,addr: intra         ,auth: pwd ,title: 'allow supabase access from intranet'    ,order: 50 }
          - { user: all ,db: postgres  ,addr: 172.17.0.0/16 ,auth: pwd ,title: 'allow access from local docker network' ,order: 50 }
        pg_crontab:
          - '00 01 * * * /pg/bin/pg-backup full'  # make a full backup every 1am
          - '*  *  * * * /pg/bin/supa-kick'       # kick supabase _analytics lag per minute: https://github.com/pgsty/pigsty/issues/581

    #----------------------------------------------#
    # Supabase
    #----------------------------------------------#
    # ./docker.yml
    # ./app.yml

    # the supabase stateless containers (default username & password: supabase/pigsty)
    supabase:
      hosts:
        10.10.10.10: {}
      vars:
        docker_enabled: true                              # enable docker on this group
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        app: supabase                                     # specify app name (supa) to be installed (in the apps)
        apps:                                             # define all applications
          supabase:                                       # the definition of supabase app
            conf:                                         # override /opt/supabase/.env

              # IMPORTANT: CHANGE JWT_SECRET AND REGENERATE CREDENTIALS ACCORDINGLY!!!!!!!!!!!
              # https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
              JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
              ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
              SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
              PG_META_CRYPTO_KEY: your-encryption-key-32-chars-min

              DASHBOARD_USERNAME: supabase
              DASHBOARD_PASSWORD: pigsty

              # 32~64 random characters string for logflare
              LOGFLARE_PUBLIC_ACCESS_TOKEN: 1234567890abcdef1234567890abcdef
              LOGFLARE_PRIVATE_ACCESS_TOKEN: fedcba0987654321fedcba0987654321

              # postgres connection string (use the correct ip and port)
              POSTGRES_HOST: 10.10.10.10      # point to the local postgres node
              POSTGRES_PORT: 5436             # access via the 'default' service, which always route to the primary postgres
              POSTGRES_DB: postgres           # the supabase underlying database
              POSTGRES_PASSWORD: DBUser.Supa  # password for supabase_admin and multiple supabase users

              # expose supabase via domain name
              SITE_URL: https://supa.pigsty                # <------- Change This to your external domain name
              API_EXTERNAL_URL: https://supa.pigsty        # <------- Otherwise the storage api may not work!
              SUPABASE_PUBLIC_URL: https://supa.pigsty     # <------- DO NOT FORGET TO PUT IT IN infra_portal!

              # if using s3/minio as file storage
              S3_BUCKET: data
              S3_ENDPOINT: https://sss.pigsty:9000
              S3_ACCESS_KEY: s3user_data
              S3_SECRET_KEY: S3User.Data
              S3_FORCE_PATH_STYLE: true
              S3_PROTOCOL: https
              S3_REGION: stub
              MINIO_DOMAIN_IP: 10.10.10.10  # sss.pigsty domain name will resolve to this ip statically

              # if using SMTP (optional)
              #SMTP_ADMIN_EMAIL: [email protected]
              #SMTP_HOST: supabase-mail
              #SMTP_PORT: 2500
              #SMTP_USER: fake_mail_user
              #SMTP_PASS: fake_mail_password
              #SMTP_SENDER_NAME: fake_sender
              #ENABLE_ANONYMOUS_USERS: false


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra
    #----------------------------------------------#
    version: v4.0.0                       # pigsty version string
    admin_ip: 10.10.10.10                 # admin node ip address
    region: default                       # upstream mirror region: default|china|europe
    proxy_env:                            # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    certbot_sign: false                   # enable certbot to sign https certificate for infra portal
    certbot_email: [email protected]         # replace your email address to receive expiration notice
    infra_portal:                         # infra services exposed via portal
      home      : { domain: i.pigsty }    # default domain name
      pgadmin   : { domain: adm.pigsty ,endpoint: "${admin_ip}:8885" }
      bytebase  : { domain: ddl.pigsty ,endpoint: "${admin_ip}:8887" }
      #minio     : { domain: m.pigsty   ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      # Nginx / Domain / HTTPS : https://pigsty.io/docs/infra/admin/portal
      supa :                              # nginx server config for supabase
        domain: supa.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8000"      # supabase service endpoint: IP:PORT
        websocket: true                   # add websocket support
        certbot: supa.pigsty              # certbot cert name, apply with `make cert`

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: false             # do not overwrite node hostname on single node mode
    node_tune: oltp                       # node tuning specs: oltp,olap,tiny,crit
    node_etc_hosts:                       # add static domains to all nodes /etc/hosts
      - 10.10.10.10 i.pigsty sss.pigsty supa.pigsty
    node_repo_modules: node,pgsql,infra   # use pre-made local repo rather than install from upstream
    node_repo_remove: true                # remove existing node repo for node managed by pigsty
    #node_packages: [openssh-server]      # packages to be installed current nodes with latest version
    #node_timezone: Asia/Hong_Kong        # overwrite node timezone

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18                        # default postgres version
    pg_conf: oltp.yml                     # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_safeguard: false                   # prevent purging running postgres instance?
    pg_default_schemas: [ monitor, extensions ] # add new schema: extensions
    pg_default_extensions:                # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor     }
      - { name: pgstattuple        ,schema: monitor     }
      - { name: pg_buffercache     ,schema: monitor     }
      - { name: pageinspect        ,schema: monitor     }
      - { name: pg_prewarm         ,schema: monitor     }
      - { name: pg_visibility      ,schema: monitor     }
      - { name: pg_freespacemap    ,schema: monitor     }
      - { name: pg_wait_sampling   ,schema: monitor     }
      # move default extensions to `extensions` schema for supabase
      - { name: postgres_fdw       ,schema: extensions  }
      - { name: file_fdw           ,schema: extensions  }
      - { name: btree_gist         ,schema: extensions  }
      - { name: btree_gin          ,schema: extensions  }
      - { name: pg_trgm            ,schema: extensions  }
      - { name: intagg             ,schema: extensions  }
      - { name: intarray           ,schema: extensions  }
      - { name: pg_repack          ,schema: extensions  }

    #----------------------------------------------#
    # BACKUP : https://pigsty.io/docs/pgsql/backup
    #----------------------------------------------#
    minio_endpoint: https://sss.pigsty:9000 # explicit overwrite minio endpoint with haproxy port
    pgbackrest_method: minio              # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_repo:                      # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                              # default pgbackrest repo with local posix fs
        path: /pg/backup                  # local backup directory, `/pg/backup` by default
        retention_full_type: count        # retention full backups by count
        retention_full: 2                 # keep 2, at most 3 full backups when using local fs repo
      minio:                              # optional minio repo for pgbackrest
        type: s3                          # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty           # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1              # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql                  # minio bucket name, `pgsql` by default
        s3_key: pgbackrest                # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup      # minio user secret key for pgbackrest <------------------ HEY, DID YOU CHANGE THIS?
        s3_uri_style: path                # use path style uri for minio rather than host style
        path: /pgbackrest                 # minio backup path, default is `/pgbackrest`
        storage_port: 9000                # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                          # Enable block incremental backup
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest           # AES encryption password, default is 'pgBackRest'  <----- HEY, DID YOU CHANGE THIS?
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days
      s3:                                 # you can use cloud object storage as backup repo
        type: s3                          # Add your object storage credentials here!
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: <your_bucket_name>
        s3_key: <your_access_key>
        s3_key_secret: <your_secret_key>
        s3_uri_style: host
        path: /pgbackrest
        bundle: y                         # bundle small files into a single file
        bundle_limit: 20MiB               # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB               # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc          # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest           # AES encryption password, default is 'pgBackRest'
        retention_full_type: time         # retention full backup by time on minio repo
        retention_full: 14                # keep full backup for the last 14 days

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Installation Demo


Explanation

The supabase template provides a complete self-hosted Supabase solution, allowing you to run this open-source Firebase alternative on your own infrastructure.

Architecture:

  • PostgreSQL: Production-grade Pigsty-managed PostgreSQL (with HA support)
  • Docker Containers: Supabase stateless services (Auth, Storage, Realtime, Edge Functions, etc.)
  • MinIO: S3-compatible object storage for file storage and PostgreSQL backup
  • Nginx: Reverse proxy and HTTPS termination

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Supabase’s built-in database container
  • Supports PostgreSQL high availability (can be expanded to three-node cluster)
  • Installs all Supabase-required extensions (pg_net, pgjwt, pg_graphql, vector, etc.)
  • Integrated MinIO object storage for file uploads and backups
  • HTTPS support with Let’s Encrypt automatic certificates

Deployment Steps:

curl https://repo.pigsty.io/get | bash   # Download Pigsty
./configure -c supabase                   # Use supabase config template
./deploy.yml                              # Install Pigsty, PostgreSQL, MinIO
./docker.yml                              # Install Docker
./app.yml                                 # Start Supabase containers

Access:

# Supabase Studio
https://supa.pigsty   (username: supabase, password: pigsty)

# Direct PostgreSQL connection
psql postgres://supabase_admin:<password>@10.10.10.10:5432/postgres

Use Cases:

  • Need to self-host BaaS (Backend as a Service) platform
  • Want full control over data and infrastructure
  • Need enterprise-grade PostgreSQL HA and backups
  • Compliance or cost concerns with Supabase cloud service

Notes:

  • Must change JWT_SECRET: Use at least 32-character random string, and regenerate ANON_KEY and SERVICE_ROLE_KEY
  • Configure proper domain names (SITE_URL, API_EXTERNAL_URL)
  • Production environments should enable HTTPS (can use certbot for auto certificates)
  • Docker network needs access to PostgreSQL (172.17.0.0/16 HBA rule configured)
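The first note above asks you to regenerate `ANON_KEY` and `SERVICE_ROLE_KEY` from a fresh `JWT_SECRET`. Any JWT tool works; here is a minimal stdlib Python sketch that signs HS256 tokens with the standard Supabase claims (`role`, `iss`, `iat`, `exp` are the usual claims, assumed here; verify against your Supabase version):

```python
import base64, hashlib, hmac, json, secrets, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(secret: str, role: str, days: int = 3650) -> str:
    """Sign a minimal HS256 JWT for the given Supabase role."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"role": role, "iss": "supabase", "iat": now, "exp": now + days * 86400}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

jwt_secret = secrets.token_urlsafe(48)       # >= 32 chars, as the note requires
anon_key = make_jwt(jwt_secret, "anon")
service_key = make_jwt(jwt_secret, "service_role")
print(jwt_secret, anon_key, service_key, sep="\n")
```

Put the three values into the Supabase app environment together; tokens signed with the old secret stop validating once `JWT_SECRET` changes.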

8.18 - HA Templates

8.19 - ha/citus

13-node Citus distributed PostgreSQL cluster, 1 coordinator + 5 worker groups with HA

The ha/citus template deploys a complete Citus distributed PostgreSQL cluster with 1 infra node, 1 coordinator group, and 5 worker groups (12 Citus nodes total), providing transparent horizontal scaling and data sharding.


Overview

  • Config Name: ha/citus
  • Node Count: 13 nodes (1 infra + 1 coordinator×2 + 5 workers×2)
  • Description: Citus distributed PostgreSQL HA cluster
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta, ha/trio

Usage:

./configure -c ha/citus

Note: 13-node template, modify IP addresses after generation


Content

Source: pigsty/conf/ha/citus.yml

---
#==============================================================#
# File      :   citus.yml
# Desc      :   13-node Citus (6-group Distributive) Config Template
# Ctime     :   2020-05-22
# Mtime     :   2025-01-20
# Docs      :   https://pigsty.io/docs/conf/citus
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng (rh@vonng.com)
#==============================================================#

# This is the config template for Citus Distributive Cluster
# tutorial: https://pigsty.io/docs/pgsql/kernel/citus
# we will use the local repo for cluster bootstrapping
#
# Topology:
#   - pg-meta  : cmdb on infra node (10.10.10.10)
#   - pg-citus1: coordinator group 0 (10.10.10.21, 22)  VIP: 10.10.10.29
#   - pg-citus2: worker group 1 (10.10.10.31, 32)       VIP: 10.10.10.39
#   - pg-citus3: worker group 2 (10.10.10.41, 42)       VIP: 10.10.10.49
#   - pg-citus4: worker group 3 (10.10.10.51, 52)       VIP: 10.10.10.59
#   - pg-citus5: worker group 4 (10.10.10.61, 62)       VIP: 10.10.10.69
#   - pg-citus6: worker group 5 (10.10.10.71, 72)       VIP: 10.10.10.79
#
# Usage:
#   curl https://repo.pigsty.io/get | bash
#   ./configure -c ha/citus
#   ./deploy.yml

all:
  children:
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 }}}
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1  }}, vars: { etcd_cluster: etcd }}
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [dbrole_admin   ] ,comment: pigsty admin user }
        pg_databases: [{ name: meta ,baseline: cmdb.sql ,comment: "pigsty meta database" ,schemas: [pigsty] ,extensions: [ postgis, vector ]}]
        pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every day 1am

    #----------------------------------------------------------#
    # pg-citus: 6 cluster groups, 12 nodes total
    #----------------------------------------------------------#
    pg-citus:
      hosts:

        # coordinator group (group 0)
        10.10.10.21: { pg_group: 0, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.29/24 ,pg_seq: 1, pg_role: primary }
        10.10.10.22: { pg_group: 0, pg_cluster: pg-citus1 ,pg_vip_address: 10.10.10.29/24 ,pg_seq: 2, pg_role: replica }

        # worker group 1 (pg-citus2)
        10.10.10.31: { pg_group: 1, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.39/24 ,pg_seq: 1, pg_role: primary }
        10.10.10.32: { pg_group: 1, pg_cluster: pg-citus2 ,pg_vip_address: 10.10.10.39/24 ,pg_seq: 2, pg_role: replica }

        # worker group 2 (pg-citus3)
        10.10.10.41: { pg_group: 2, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.49/24 ,pg_seq: 1, pg_role: primary }
        10.10.10.42: { pg_group: 2, pg_cluster: pg-citus3 ,pg_vip_address: 10.10.10.49/24 ,pg_seq: 2, pg_role: replica }

        # worker group 3 (pg-citus4)
        10.10.10.51: { pg_group: 3, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.59/24 ,pg_seq: 1, pg_role: primary }
        10.10.10.52: { pg_group: 3, pg_cluster: pg-citus4 ,pg_vip_address: 10.10.10.59/24 ,pg_seq: 2, pg_role: replica }

        # worker group 4 (pg-citus5)
        10.10.10.61: { pg_group: 4, pg_cluster: pg-citus5 ,pg_vip_address: 10.10.10.69/24 ,pg_seq: 1, pg_role: primary }
        10.10.10.62: { pg_group: 4, pg_cluster: pg-citus5 ,pg_vip_address: 10.10.10.69/24 ,pg_seq: 2, pg_role: replica }

        # worker group 5 (pg-citus6)
        10.10.10.71: { pg_group: 5, pg_cluster: pg-citus6 ,pg_vip_address: 10.10.10.79/24 ,pg_seq: 1, pg_role: primary }
        10.10.10.72: { pg_group: 5, pg_cluster: pg-citus6 ,pg_vip_address: 10.10.10.79/24 ,pg_seq: 2, pg_role: replica }

      vars:
        pg_mode: citus                            # pgsql cluster mode: citus
        pg_shard: pg-citus                        # citus shard name: pg-citus
        pg_primary_db: citus                      # primary database used by citus
        pg_dbsu_password: DBUser.Postgres         # enable dbsu password access for citus
        pg_extensions: [ citus, postgis, pgvector, topn, pg_cron, hll ]
        pg_libs: 'citus, pg_cron, pg_stat_statements'
        pg_users: [{ name: dbuser_citus ,password: DBUser.Citus ,pgbouncer: true ,roles: [ dbrole_admin ] }]
        pg_databases: [{ name: citus ,owner: dbuser_citus ,extensions: [ citus, vector, topn, pg_cron, hll ] }]
        pg_parameters:
          cron.database_name: citus
          citus.node_conninfo: 'sslrootcert=/pg/cert/ca.crt sslmode=verify-full'
        pg_hba_rules:
          - { user: 'all' ,db: all  ,addr: 127.0.0.1/32  ,auth: ssl ,title: 'all user ssl access from localhost' }
          - { user: 'all' ,db: all  ,addr: intra         ,auth: ssl ,title: 'all user ssl access from intranet'  }
        pg_vip_enabled: true
        pg_vip_interface: eth1
        pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every day 1am

  vars:
    #----------------------------------------------#
    # INFRA : https://pigsty.io/docs/infra/param
    #----------------------------------------------#
    version: v4.0.0
    admin_ip: 10.10.10.10
    region: default
    infra_portal:
      home : { domain: i.pigsty }

    #----------------------------------------------#
    # NODE : https://pigsty.io/docs/node/param
    #----------------------------------------------#
    nodename_overwrite: true
    node_repo_modules: node,infra,pgsql
    node_tune: oltp

    #----------------------------------------------#
    # PGSQL : https://pigsty.io/docs/pgsql/param
    #----------------------------------------------#
    pg_version: 18  # PostgreSQL 14-18
    pg_conf: oltp.yml
    pg_packages: [ pgsql-main, pgsql-common ]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Topology

| Cluster   | Nodes | IP Addresses    | VIP         | Role                  |
|-----------|-------|-----------------|-------------|-----------------------|
| pg-meta   | 1     | 10.10.10.10     | -           | Infra + CMDB          |
| pg-citus1 | 2     | 10.10.10.21, 22 | 10.10.10.29 | Coordinator (group 0) |
| pg-citus2 | 2     | 10.10.10.31, 32 | 10.10.10.39 | Worker (group 1)      |
| pg-citus3 | 2     | 10.10.10.41, 42 | 10.10.10.49 | Worker (group 2)      |
| pg-citus4 | 2     | 10.10.10.51, 52 | 10.10.10.59 | Worker (group 3)      |
| pg-citus5 | 2     | 10.10.10.61, 62 | 10.10.10.69 | Worker (group 4)      |
| pg-citus6 | 2     | 10.10.10.71, 72 | 10.10.10.79 | Worker (group 5)      |

Architecture:

  • pg-meta: Infra node running Grafana, Prometheus, etcd, plus standalone CMDB
  • pg-citus1: Coordinator (group 0), receives queries and routes to workers, 1 primary + 1 replica
  • pg-citus2~6: Workers (group 1~5), store sharded data, each with 1 primary + 1 replica via Patroni
  • VIP: Each group has L2 VIP managed by vip-manager for transparent failover

Explanation

The ha/citus template deploys production-grade Citus cluster for large-scale horizontal scaling scenarios.

Key Features:

  • Horizontal Scaling: 5 worker groups for linear storage/compute scaling
  • High Availability: Each group with 1 primary + 1 replica, auto-failover
  • L2 VIP: Virtual IP per group, transparent failover to clients
  • SSL Encryption: Inter-node communication uses SSL certificates
  • Transparent Sharding: Data auto-distributed across workers

Pre-installed Extensions:

pg_extensions: [ citus, postgis, pgvector, topn, pg_cron, hll ]
pg_libs: 'citus, pg_cron, pg_stat_statements'

Security:

  • pg_dbsu_password enabled for Citus inter-node communication
  • HBA rules require SSL authentication
  • Inter-node uses certificate verification: sslmode=verify-full
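The same `verify-full` requirement applies to clients connecting through the HBA rules above. A small sketch that assembles a libpq keyword DSN with certificate verification; the host, user, password, and CA path are the template defaults shown in this config (the CA must also be present at that path on the client machine, which is an assumption here):

```python
def citus_dsn(host: str, user: str, password: str, dbname: str = "citus",
              ca: str = "/etc/pki/ca.crt") -> str:
    """Build a libpq DSN that enforces certificate verification."""
    parts = {"host": host, "port": 5432, "user": user, "password": password,
             "dbname": dbname, "sslmode": "verify-full", "sslrootcert": ca}
    return " ".join(f"{k}={v}" for k, v in parts.items())

# connect to the coordinator VIP with the template's default credentials
print(citus_dsn("10.10.10.29", "dbuser_citus", "DBUser.Citus"))
```

With `sslmode=verify-full`, libpq checks both the server certificate chain against the CA and that the certificate matches the host it dialed; plain `require` would skip both checks.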

Deployment

# 1. Download Pigsty
curl https://repo.pigsty.io/get | bash

# 2. Use ha/citus template
./configure -c ha/citus

# 3. Modify IPs and passwords
vi pigsty.yml

# 4. Deploy entire cluster
./deploy.yml

Verify after deployment:

-- Connect to coordinator
psql -h 10.10.10.29 -U dbuser_citus -d citus

-- Check worker nodes
SELECT * FROM citus_get_active_worker_nodes();

-- Check shard distribution
SELECT * FROM citus_shards;

Examples

Create Distributed Table:

-- Create table
CREATE TABLE events (
    tenant_id INT,
    event_id BIGSERIAL,
    event_time TIMESTAMPTZ DEFAULT now(),
    payload JSONB,
    PRIMARY KEY (tenant_id, event_id)
);

-- Distribute by tenant_id
SELECT create_distributed_table('events', 'tenant_id');

-- Insert (auto-routed to correct shard)
INSERT INTO events (tenant_id, payload)
VALUES (1, '{"type": "click"}');

-- Query (parallel execution)
SELECT tenant_id, count(*)
FROM events
GROUP BY tenant_id;

Create Reference Table (replicated to all nodes):

CREATE TABLE tenants (
    tenant_id INT PRIMARY KEY,
    name TEXT
);

SELECT create_reference_table('tenants');

Use Cases

  • Multi-tenant SaaS: Shard by tenant_id for data isolation and parallel queries
  • Real-time Analytics: Large-scale event data aggregation
  • Timeseries Data: Combine with TimescaleDB for massive timeseries
  • Horizontal Scaling: When single-table data exceeds single-node capacity
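The sharding behind these use cases is deterministic: every row lands on a shard chosen by hashing its distribution column, and shards are spread across worker groups. A simplified illustration (not Citus's actual algorithm, which uses PostgreSQL's `hashtext()` over fixed hash ranges recorded in `pg_dist_shard`; `crc32` stands in here):

```python
import zlib

SHARD_COUNT = 32          # citus.shard_count defaults to 32
WORKER_GROUPS = 5         # pg-citus2 .. pg-citus6 in this template

def shard_for(tenant_id: int) -> int:
    """Map a distribution-column value to a shard (simplified: crc32, not hashtext)."""
    return zlib.crc32(str(tenant_id).encode()) % SHARD_COUNT

def worker_for(shard: int) -> str:
    """Assume shards are placed round-robin across the worker groups."""
    return f"pg-citus{2 + shard % WORKER_GROUPS}"

for tid in (1, 2, 42):
    s = shard_for(tid)
    print(f"tenant {tid} -> shard {s} -> {worker_for(s)}")
```

The practical consequence is the same as in real Citus: queries filtering on the distribution column route to a single shard, while queries without it fan out to every worker group.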

Notes

  • PostgreSQL Version: Citus supports PG 14~18, this template defaults to PG18
  • Distribution Column: Choose wisely (typically tenant_id or timestamp), critical for performance
  • Cross-shard Limits: Foreign keys must include distribution column, some DDL restrictions
  • Network: Configure correct pg_vip_interface (default eth1)
  • Architecture: Citus extension does not support ARM64

8.20 - ha/simu

20-node production environment simulation for large-scale deployment testing

The ha/simu configuration template is a 20-node production environment simulation, requiring a powerful host machine to run.


Overview

  • Config Name: ha/simu
  • Node Count: 20 nodes (vagrant spec: pigsty/vagrant/spec/simu.rb)
  • Description: 20-node production environment simulation, requires powerful host machine
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64

Usage:

./configure -c ha/simu [-i <primary_ip>]

Content

Source: pigsty/conf/ha/simu.yml

---
#==============================================================#
# File      :   simu.yml
# Desc      :   Pigsty Simubox: a 20 node prod simulation env
# Ctime     :   2023-07-20
# Mtime     :   2026-01-19
# Docs      :   https://pigsty.io/docs/conf/simu
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license
# Copyright :   2018-2025  Ruohang Feng / Vonng (rh@vonng.com)
#==============================================================#

all:

  children:

    #==========================================================#
    # infra: 3 nodes
    #==========================================================#
    # ./infra.yml -l infra
    # ./docker.yml -l infra (optional)
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        docker_enabled: true
        node_conf: oltp         # use oltp template for infra nodes
        pg_conf: oltp.yml       # use oltp template for infra pgsql
        pg_exporters:           # bin/pgmon-add pg-meta2/pg-src2/pg-dst2
          20001: {pg_cluster: pg-meta2   ,pg_seq: 1 ,pg_host: 10.10.10.10, pg_databases: [{ name: meta }]}
          20002: {pg_cluster: pg-meta2   ,pg_seq: 2 ,pg_host: 10.10.10.11, pg_databases: [{ name: meta }]}
          20003: {pg_cluster: pg-meta2   ,pg_seq: 3 ,pg_host: 10.10.10.12, pg_databases: [{ name: meta }]}

          20004: {pg_cluster: pg-src2    ,pg_seq: 1 ,pg_host: 10.10.10.31, pg_databases: [{ name: src }]}
          20005: {pg_cluster: pg-src2    ,pg_seq: 2 ,pg_host: 10.10.10.32, pg_databases: [{ name: src }]}
          20006: {pg_cluster: pg-src2    ,pg_seq: 3 ,pg_host: 10.10.10.33, pg_databases: [{ name: src }]}

          20007: {pg_cluster: pg-dst2    ,pg_seq: 1 ,pg_host: 10.10.10.41, pg_databases: [{ name: dst }]}
          20008: {pg_cluster: pg-dst2    ,pg_seq: 2 ,pg_host: 10.10.10.42, pg_databases: [{ name: dst }]}
          20009: {pg_cluster: pg-dst2    ,pg_seq: 3 ,pg_host: 10.10.10.43, pg_databases: [{ name: dst }]}


    #==========================================================#
    # etcd: 5 nodes dedicated etcd cluster
    #==========================================================#
    # ./etcd.yml -l etcd;
    etcd:
      hosts:
        10.10.10.25: { etcd_seq: 1 }
        10.10.10.26: { etcd_seq: 2 }
        10.10.10.27: { etcd_seq: 3 }
        10.10.10.28: { etcd_seq: 4 }
        10.10.10.29: { etcd_seq: 5 }
      vars:
        etcd_cluster: etcd

    #==========================================================#
    # minio: 4 nodes dedicated minio cluster
    #==========================================================#
    # ./minio.yml -l minio;
    minio:
      hosts:
        10.10.10.21: { minio_seq: 1 }
        10.10.10.22: { minio_seq: 2 }
        10.10.10.23: { minio_seq: 3 }
        10.10.10.24: { minio_seq: 4 }
      vars:
        minio_cluster: minio
        minio_data: '/data{1...4}' # 4 node x 4 disk
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #==========================================================#
    # proxy: 2 nodes used as dedicated haproxy server
    #==========================================================#
    # ./node.yml -l proxy
    proxy:
      hosts:
        10.10.10.18: { vip_role: master }
        10.10.10.19: { vip_role: backup }
      vars:
        vip_enabled: true
        vip_address: 10.10.10.20
        vip_vrid: 20
        vip_interface: eth1
        haproxy_services:      # expose minio service : sss.pigsty:9000
          - name: minio        # [REQUIRED] service name, unique
            port: 9000         # [REQUIRED] service port, unique
            balance: leastconn # Use leastconn algorithm and minio health check
            options: [ "option httpchk", "option http-keep-alive", "http-check send meth OPTIONS uri /minio/health/live", "http-check expect status 200" ]
            servers:           # reload service with ./node.yml -t haproxy_config,haproxy_reload
              - { name: minio-1 ,ip: 10.10.10.21 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.22 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.23 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.24 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

    #==========================================================#
    # pg-meta: reuse infra node as meta cmdb
    #==========================================================#
    # ./pgsql.yml -l pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1 , pg_role: primary }
        10.10.10.11: { pg_seq: 2 , pg_role: replica }
        10.10.10.12: { pg_seq: 3 , pg_role: replica }
      vars:
        pg_cluster: pg-meta
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1
        pg_users:
          - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector}]}
          - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
          - { name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database }
        pg_libs: 'pg_stat_statements, auto_explain' # load pg_stat_statements and auto_explain in shared_preload_libraries

    #==========================================================#
    # pg-src: dedicate 3 node source cluster
    #==========================================================#
    # ./pgsql.yml -l pg-src
    pg-src:
      hosts:
        10.10.10.31: { pg_seq: 1, pg_role: primary }
        10.10.10.32: { pg_seq: 2, pg_role: replica }
        10.10.10.33: { pg_seq: 3, pg_role: replica }
      vars:
        pg_cluster: pg-src
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: src }]


    #==========================================================#
    # pg-dst: dedicate 3 node destination cluster
    #==========================================================#
    # ./pgsql.yml -l pg-dst
    pg-dst:
      hosts:
        10.10.10.41: { pg_seq: 1, pg_role: primary }
        10.10.10.42: { pg_seq: 2, pg_role: replica }
        10.10.10.43: { pg_seq: 3, pg_role: replica }
      vars:
        pg_cluster: pg-dst
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.4/24
        pg_vip_interface: eth1
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: dst } ]


    #==========================================================#
    # redis-meta: reuse the 5 etcd nodes as redis sentinel
    #==========================================================#
    # ./redis.yml -l redis-meta
    redis-meta:
      hosts:
        10.10.10.25: { redis_node: 1 , redis_instances: { 26379: {} } }
        10.10.10.26: { redis_node: 2 , redis_instances: { 26379: {} } }
        10.10.10.27: { redis_node: 3 , redis_instances: { 26379: {} } }
        10.10.10.28: { redis_node: 4 , redis_instances: { 26379: {} } }
        10.10.10.29: { redis_node: 5 , redis_instances: { 26379: {} } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 256MB
        redis_sentinel_monitor:  # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-src, host: 10.10.10.31, port: 6379 ,password: redis.src, quorum: 1 }
          - { name: redis-dst, host: 10.10.10.41, port: 6379 ,password: redis.dst, quorum: 1 }

    #==========================================================#
    # redis-src: reuse pg-src 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-src
    redis-src:
      hosts:
        10.10.10.31: { redis_node: 1 , redis_instances: {6379: {  } }}
        10.10.10.32: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.31 6379' }, 6380: { replica_of: '10.10.10.32 6379' } }}
        10.10.10.33: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.31 6379' }, 6380: { replica_of: '10.10.10.33 6379' } }}
      vars:
        redis_cluster: redis-src
        redis_password: 'redis.src'
        redis_max_memory: 64MB

    #==========================================================#
    # redis-dst: reuse pg-dst 3 nodes for redis
    #==========================================================#
    # ./redis.yml -l redis-dst
    redis-dst:
      hosts:
        10.10.10.41: { redis_node: 1 , redis_instances: {6379: {  }                               }}
        10.10.10.42: { redis_node: 2 , redis_instances: {6379: { replica_of: '10.10.10.41 6379' } }}
        10.10.10.43: { redis_node: 3 , redis_instances: {6379: { replica_of: '10.10.10.41 6379' } }}
      vars:
        redis_cluster: redis-dst
        redis_password: 'redis.dst'
        redis_max_memory: 64MB

    #==========================================================#
    # pg-tmp: reuse proxy nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-tmp
    pg-tmp:
      hosts:
        10.10.10.18: { pg_seq: 1 ,pg_role: primary }
        10.10.10.19: { pg_seq: 2 ,pg_role: replica }
      vars:
        pg_cluster: pg-tmp
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: tmp } ]

    #==========================================================#
    # pg-etcd: reuse etcd nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-etcd
    pg-etcd:
      hosts:
        10.10.10.25: { pg_seq: 1 ,pg_role: primary }
        10.10.10.26: { pg_seq: 2 ,pg_role: replica }
        10.10.10.27: { pg_seq: 3 ,pg_role: replica }
        10.10.10.28: { pg_seq: 4 ,pg_role: replica }
        10.10.10.29: { pg_seq: 5 ,pg_role: offline }
      vars:
        pg_cluster: pg-etcd
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: etcd } ]

    #==========================================================#
    # pg-minio: reuse minio nodes as pgsql cluster
    #==========================================================#
    # ./pgsql.yml -l pg-minio
    pg-minio:
      hosts:
        10.10.10.21: { pg_seq: 1 ,pg_role: primary }
        10.10.10.22: { pg_seq: 2 ,pg_role: replica }
        10.10.10.23: { pg_seq: 3 ,pg_role: replica }
        10.10.10.24: { pg_seq: 4 ,pg_role: replica }
      vars:
        pg_cluster: pg-minio
        pg_users: [ { name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] } ]
        pg_databases: [ { name: minio } ]

    #==========================================================#
    # ferret: reuse pg-src as mongo (ferretdb)
    #==========================================================#
    # ./mongo.yml -l ferret
    ferret:
      hosts:
        10.10.10.31: { mongo_seq: 1 }
        10.10.10.32: { mongo_seq: 2 }
        10.10.10.33: { mongo_seq: 3 }
      vars:
        mongo_cluster: ferret
        mongo_pgurl: 'postgres://test:test@10.10.10.3:5432/src'  # pg-src VIP


  #============================================================#
  # Global Variables
  #============================================================#
  vars:

    #==========================================================#
    # INFRA
    #==========================================================#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      minio        : { domain: m.pigsty    ,endpoint: "10.10.10.21:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty  ,endpoint: "127.0.0.1:8884" }
      pgadmin      : { domain: adm.pigsty  ,endpoint: "127.0.0.1:8885" }
      pgweb        : { domain: cli.pigsty  ,endpoint: "127.0.0.1:8886" }
      bytebase     : { domain: ddl.pigsty  ,endpoint: "127.0.0.1:8887" }
      jupyter      : { domain: lab.pigsty  ,endpoint: "127.0.0.1:8888"  , websocket: true }
      supa         : { domain: supa.pigsty ,endpoint: "10.10.10.10:8000", websocket: true }

    #==========================================================#
    # NODE
    #==========================================================#
    node_id_from_pg: true             # derive node identity (hostname) from pg identity when applicable
    node_conf: tiny                   # use small node template
    node_timezone: Asia/Hong_Kong     # use Asia/Hong_Kong Timezone
    node_dns_servers:                 # DNS servers in /etc/resolv.conf
      - 10.10.10.10
      - 10.10.10.11
    node_etc_hosts:
      - 10.10.10.10 i.pigsty
      - 10.10.10.20 sss.pigsty        # point minio service domain to the L2 VIP of proxy cluster
    node_ntp_servers:                 # NTP servers in /etc/chrony.conf
      - pool cn.pool.ntp.org iburst
      - pool 10.10.10.10 iburst
    node_admin_ssh_exchange: false    # exchange admin ssh key among node cluster

    #==========================================================#
    # PGSQL
    #==========================================================#
    pg_conf: tiny.yml
    pgbackrest_method: minio          # USE THE HA MINIO THROUGH A LOAD BALANCER
    pg_dbsu_ssh_exchange: false       # do not exchange dbsu ssh key among pgsql cluster
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days
    pg_crontab:  # full backup on monday 1am, vacuum maintenance daily at 5am
      - '00 01 * * 1 /pg/bin/pg-backup'
      - '00 05 * * * /pg/bin/pg-vacuum'
    pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
      - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }

    #==========================================================#
    # Repo
    #==========================================================#
    repo_packages: [
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
      pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl
    ]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/simu template is a large-scale production environment simulation for testing and validating complex scenarios.

Architecture:

  • 2-node HA INFRA (monitoring/alerting/Nginx/DNS)
  • 5-node HA ETCD and MinIO (multi-disk)
  • 2-node Proxy (HAProxy + Keepalived VIP)
  • Multiple PostgreSQL clusters:
    • pg-meta: 2-node HA
    • pg-v12~v17: Single-node multi-version testing
    • pg-pitr: Single-node PITR testing
    • pg-test: 4-node HA
    • pg-src/pg-dst: 3+2 node replication testing
    • pg-citus: 10-node distributed cluster
  • Multiple Redis modes: primary-replica, sentinel, cluster
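
All of these clusters follow Pigsty's identity convention: an instance is named `{pg_cluster}-{pg_seq}`, derived from the inventory entries. A minimal sketch of that derivation (the host map below is a hypothetical excerpt, not the full simu inventory):

```python
# Pigsty identity convention (assumption: the standard "{pg_cluster}-{pg_seq}"
# naming used throughout these templates).

def instance_names(cluster: str, hosts: dict) -> dict:
    """Map each host IP to its derived instance name."""
    return {ip: f"{cluster}-{spec['pg_seq']}" for ip, spec in hosts.items()}

# Hypothetical excerpt of a pg-test cluster from a sandbox inventory
pg_test = {
    "10.10.10.11": {"pg_seq": 1, "pg_role": "primary"},
    "10.10.10.12": {"pg_seq": 2, "pg_role": "replica"},
}

print(instance_names("pg-test", pg_test))
# → {'10.10.10.11': 'pg-test-1', '10.10.10.12': 'pg-test-2'}
```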

Use Cases:

  • Large-scale deployment testing and validation
  • High availability failover drills
  • Performance benchmarking
  • New feature preview and evaluation

Notes:

  • Requires a powerful host machine (64GB+ RAM recommended)
  • Uses Vagrant virtual machines for simulation

8.21 - ha/full

Four-node complete feature demonstration environment with two PostgreSQL clusters, MinIO, Redis, etc.

The ha/full configuration template is Pigsty’s recommended sandbox demonstration environment, deploying two PostgreSQL clusters across four nodes for testing and demonstrating various Pigsty capabilities.

Most Pigsty tutorials and examples are based on this template’s sandbox environment.


Overview

  • Config Name: ha/full
  • Node Count: Four nodes
  • Description: Four-node complete feature demonstration environment with two PostgreSQL clusters, MinIO, Redis, etc.
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: ha/trio, ha/safe, demo/demo

Usage:

./configure -c ha/full [-i <primary_ip>]

After configuration, modify the IP addresses of the other three nodes.
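
If you prefer to script that step, a minimal sketch of a plain-text IP rewrite (the sample line and target address are illustrative, not part of the template):

```python
# Hypothetical helper: rewrite placeholder node IPs in a generated
# pigsty.yml before running the install playbook.

def retarget_ips(config_text: str, ip_map: dict) -> str:
    """Replace each placeholder IP with its real address."""
    for old, new in ip_map.items():
        config_text = config_text.replace(old, new)
    return config_text

sample = "10.10.10.11: { pg_seq: 1, pg_role: primary }"
print(retarget_ips(sample, {"10.10.10.11": "192.168.1.11"}))
# → 192.168.1.11: { pg_seq: 1, pg_role: primary }
```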


Content

Source: pigsty/conf/ha/full.yml

---
#==============================================================#
# File      :   full.yml
# Desc      :   Pigsty Local Sandbox 4-node Demo Config
# Ctime     :   2020-05-22
# Mtime     :   2026-01-16
# Docs      :   https://pigsty.io/docs/conf/full
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


all:

  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:

    # infra: monitor, alert, repo, etc..
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }
      vars:
        docker_enabled: true      # enabled docker with ./docker.yml
        #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
        #repo_extra_packages: [ pg18-main ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    # etcd cluster for HA postgres DCS
    etcd:
      hosts:
        10.10.10.10: { etcd_seq: 1 }
      vars:
        etcd_cluster: etcd

    # minio (single node, used as backup repo)
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 }
      vars:
        minio_cluster: minio
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

    # postgres cluster: pg-meta
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta     ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] }
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


    # pgsql 3 node ha cluster: pg-test
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }]
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_crontab:  # full backup on monday 1am, incremental backup on all other days
          - '00 01 * * 1 /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
      #minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl ,pg18-olap]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/full template is Pigsty’s complete feature demonstration configuration, showcasing how its components work together.

Components Overview:

Component    Node Distribution    Description
-----------  -------------------  -------------------------------
INFRA        Node 1               Monitoring/Alerting/Nginx/DNS
ETCD         Node 1               DCS Service
MinIO        Node 1               S3-compatible Storage
pg-meta      Node 1               Single-node PostgreSQL
pg-test      Nodes 2-4            Three-node HA PostgreSQL
redis-ms     Node 1               Redis Primary-Replica Mode
redis-meta   Node 2               Redis Sentinel Mode
redis-test   Nodes 3-4            Redis Native Cluster Mode

Use Cases:

  • Pigsty feature demonstration and learning
  • Development testing environments
  • Evaluating HA architecture
  • Comparing different Redis modes

Differences from ha/trio:

  • Added second PostgreSQL cluster (pg-test)
  • Added three Redis cluster mode examples
  • Infrastructure runs on a single node (instead of three)

Notes:

  • This template is mainly for demonstration and testing; for production, refer to ha/trio or ha/safe
  • MinIO backup enabled by default; comment out related config if not needed
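
The `pg_crontab` entries in this template use standard 5-field cron syntax (minute, hour, day-of-month, month, day-of-week). A tiny illustrative interpreter for the day-of-week field, just to make the schedule explicit:

```python
# Illustrative only: interpret the day-of-week field of a 5-field
# cron entry (0/7 = Sunday, 1 = Monday, ...).

def matches_dow(field: str, dow: int) -> bool:
    """Does a cron day-of-week field ('*', '1', '2,3,4,5,6,7') match dow?"""
    if field == "*":
        return True
    return dow in {int(x) for x in field.split(",")}

# '00 01 * * 1'           -> pg-backup full : Monday only
# '00 01 * * 2,3,4,5,6,7' -> pg-backup      : every other day of the week
assert matches_dow("1", 1) and not matches_dow("1", 3)
assert matches_dow("2,3,4,5,6,7", 7)   # cron treats 7 as Sunday
```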

8.22 - ha/safe

Security-hardened HA configuration template with high-standard security best practices

The ha/safe configuration template is based on the ha/trio template, providing a security-hardened configuration with high-standard security best practices.


Overview

  • Config Name: ha/safe
  • Node Count: Three nodes (optional delayed replica)
  • Description: Security-hardened HA configuration with high-standard security best practices
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64 (some security extensions unavailable on ARM64)
  • Related: ha/trio, ha/full

Usage:

./configure -c ha/safe [-i <primary_ip>]

Security Hardening Measures

The ha/safe template implements the following security hardening:

  • Mandatory SSL Encryption: SSL enabled for both PostgreSQL and PgBouncer
  • Strong Password Policy: passwordcheck extension enforces password complexity
  • User Expiration: All users set to 20-year expiration
  • Minimal Connection Scope: Limit PostgreSQL/Patroni/PgBouncer listen addresses
  • Strict HBA Rules: Mandatory SSL authentication, admin requires certificate
  • Audit Logs: Record connection and disconnection events
  • Delayed Replica: Optional 1-hour delayed replica for recovery from mistakes
  • Critical Template: Uses crit.yml tuning template for zero data loss
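
The actual complexity rules are enforced server-side by the passwordcheck extension and are implementation-defined; the following standalone sketch merely illustrates the idea with a hypothetical policy (length ≥ 12, at least three character classes):

```python
import string

# Hypothetical policy for illustration only; passwordcheck's real
# rules differ and run inside the PostgreSQL server.

def is_strong(pw: str) -> bool:
    classes = [
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    return len(pw) >= 12 and sum(classes) >= 3

print(is_strong("Pleas3-ChangeThisPwd"))  # → True
print(is_strong("short"))                 # → False
```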

Content

Source: pigsty/conf/ha/safe.yml

---
#==============================================================#
# File      :   safe.yml
# Desc      :   Pigsty 3-node security enhance template
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://pigsty.io/docs/conf/safe
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#===== SECURITY ENHANCEMENT CONFIG TEMPLATE WITH 3 NODES ======#
#   * 3 infra nodes, 3 etcd nodes, single minio node
#   * 3-instance pgsql cluster with an extra delayed instance
#   * crit.yml templates, no data loss, checksum enforced
#   * enforce ssl on postgres & pgbouncer, use postgres by default
#   * enforce an expiration date for all users (20 years by default)
#   * enforce strong password policy with passwordcheck extension
#   * enforce changing default password for all users
#   * log connections and disconnections
#   * restrict listen ip address for postgres/patroni/pgbouncer


all:
  children:

    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars: { patroni_watchdog_mode: off }

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    pg-meta: # 3 instance postgres cluster `pg-meta`
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:
        pg_cluster: pg-meta
        pg_conf: crit.yml
        pg_users:
          - { name: dbuser_meta , password: Pleas3-ChangeThisPwd ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: Make.3ure-Compl1ance  ,expire_in: 7300 ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_services:
          - { name: standby , ip: "*" ,port: 5435 , dest: default ,selector: "[]" , backup: "[? pg_role == `primary`]" }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'
        pg_listen: '${ip},${vip},${lo}'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

    # OPTIONAL delayed cluster for pg-meta
    #pg-meta-delay: # delayed instance for pg-meta (1 hour ago)
    #  hosts: { 10.10.10.13: { pg_seq: 1, pg_role: primary, pg_upstream: 10.10.10.10, pg_delay: 1h } }
    #  vars: { pg_cluster: pg-meta-delay }


  ####################################################################
  #                          Parameters                              #
  ####################################################################
  vars: # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    patroni_ssl_enabled: true         # secure patroni RestAPI communications with SSL?
    pgbouncer_sslmode: require        # pgbouncer client ssl mode: disable|allow|prefer|require|verify-ca|verify-full, disable by default
    pg_default_service_dest: postgres # default service destination to postgres instead of pgbouncer
    pgbackrest_method: minio          # pgbackrest repo method: local,minio,[user-defined...]

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    minio_users: # and configure `pgbackrest_repo` & `minio_users` accordingly
      - { access_key: dba , secret_key: S3User.DBA.Strong.Password, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: Min10.bAckup ,policy: readwrite }
    pgbackrest_repo: # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local: # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio: # optional minio repo for pgbackrest
        s3_key: pgbackrest            # <-------- CHANGE THIS, SAME AS `minio_users` access_key
        s3_key_secret: Min10.bAckup   # <-------- CHANGE THIS, SAME AS `minio_users` secret_key
        cipher_pass: 'pgBR.${pg_cluster}'  # <-------- CHANGE THIS, you can use cluster name as part of password
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days


    #----------------------------------#
    # Access Control
    #----------------------------------#
    # add passwordcheck extension to enforce strong password policy
    pg_libs: '$libdir/passwordcheck, pg_stat_statements, auto_explain'
    pg_extensions:
      - passwordcheck, supautils, pgsodium, pg_vault, pg_session_jwt, anonymizer, pgsmcrypto, pgauditlogtofile, pgaudit #, pgaudit17, pgaudit16, pgaudit15, pgaudit14
      - pg_auth_mon, credcheck, pgcryptokey, pg_jobmon, logerrors, login_hook, set_user, pgextwlist, pg_auditor, sslutils, noset #pg_tde #pg_snakeoil
    pg_default_roles: # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [ dbrole_readonly ]               ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [ pg_monitor, dbrole_readwrite ]  ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,expire_in: 7300                        ,comment: system superuser }
      - { name: replicator ,replication: true  ,expire_in: 7300 ,roles: [ pg_monitor, dbrole_readonly ]   ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,expire_in: 7300 ,roles: [ dbrole_admin ]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 , comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [ pg_monitor ] ,expire_in: 7300 ,pgbouncer: true ,parameters: { log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_hba_rules: # postgres host-based auth rules by default, order by `order`
      - { user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'   ,order: 100}
      - { user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident'  ,order: 150}
      - { user: '${repl}'    ,db: replication ,addr: localhost ,auth: ssl   ,title: 'replicator replication from localhost' ,order: 200}
      - { user: '${repl}'    ,db: replication ,addr: intra     ,auth: ssl   ,title: 'replicator replication from intranet'  ,order: 250}
      - { user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: ssl   ,title: 'replicator postgres db from intranet'  ,order: 300}
      - { user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password'  ,order: 350}
      - { user: '${monitor}' ,db: all         ,addr: infra     ,auth: ssl   ,title: 'monitor from infra host with password' ,order: 400}
      - { user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'    ,order: 450}
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: cert  ,title: 'admin @ everywhere with ssl & cert'    ,order: 500}
      - { user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: ssl   ,title: 'pgbouncer read/write via local socket' ,order: 550}
      - { user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: ssl   ,title: 'read/write biz user via password'      ,order: 600}
      - { user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: ssl   ,title: 'allow etl offline tasks from intranet' ,order: 650}
    pgb_default_hba_rules: # pgbouncer host-based authentication rules, order by `order`
      - { user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident' ,order: 100}
      - { user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd'  ,order: 150}
      - { user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: ssl   ,title: 'monitor access via intranet with pwd'  ,order: 200}
      - { user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr'  ,order: 250}
      - { user: '${admin}'   ,db: all         ,addr: intra     ,auth: ssl   ,title: 'admin access via intranet with pwd'    ,order: 300}
      - { user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'    ,order: 350}
      - { user: 'all'        ,db: all         ,addr: intra     ,auth: ssl   ,title: 'allow all user intra access with pwd'  ,order: 400}

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    #node_selinux_mode: enforcing     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: none (skip), off (disable), zone (enable & config)

    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: You.Have2Use-A_VeryStrongPassword
    grafana_view_password: DBUser.Viewer
    #pg_admin_username: dbuser_dba
    pg_admin_password: PessWorb.Should8eStrong-eNough
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: MekeSuerYour.PassWordI5secured
    #pg_replication_username: replicator
    pg_replication_password: doNotUseThis-PasswordFor.AnythingElse
    #patroni_username: postgres
    patroni_password: don.t-forget-to-change-thEs3-password
    #haproxy_admin_username: admin
    haproxy_admin_password: GneratePasswordWith-pwgen-s-16-1
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/safe template is Pigsty’s security-hardened configuration, designed for production environments with high security requirements.

Security Features Summary:

  • SSL Encryption: full-chain SSL for PostgreSQL/PgBouncer/Patroni
  • Strong Passwords: passwordcheck extension enforces complexity
  • User Expiration: all users expire in 20 years (expire_in: 7300)
  • Strict HBA: admin remote access requires a certificate
  • Encrypted Backup: MinIO backup with AES-256-CBC encryption
  • Audit Logs: pgaudit extension for SQL audit logging
  • Delayed Replica: 1-hour delayed replica for mistake recovery

Use Cases:

  • Finance, healthcare, government sectors with high security requirements
  • Environments with compliance and audit requirements
  • Critical business with extremely high data security demands

Notes:

  • Some security extensions are unavailable on the ARM64 architecture; enable them as appropriate
  • All default passwords must be changed to strong passwords
  • Combine this template with regular security audits
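The haproxy_admin_password placeholder hints at using `pwgen -s 16 1`; if pwgen isn't installed, `openssl` works just as well. A minimal sketch for producing a 16-character random password:

```shell
# Generate a 16-character alphanumeric password (pwgen -s 16 1 is an alternative)
strong_pass="$(openssl rand -hex 16 | head -c 16)"
echo "$strong_pass"
```

Generate one such password per credential in the PASSWORD section; never reuse the same value across roles.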

8.23 - ha/trio

Three-node standard HA configuration, tolerates any single server failure

Three nodes is the minimum scale for achieving true high availability. The ha/trio template uses a three-node standard HA architecture, with INFRA, ETCD, and PGSQL all deployed across three nodes, tolerating any single server failure.


Overview

  • Config Name: ha/trio
  • Node Count: Three nodes
  • Description: Three-node standard HA architecture, tolerates any single server failure
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: ha/dual, ha/full, ha/safe

Usage:

./configure -c ha/trio [-i <primary_ip>]

After configuration, modify placeholder IPs 10.10.10.11 and 10.10.10.12 to actual node IP addresses.
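The placeholder substitution can be scripted with `sed`; the expression below is shown on sample input (the 192.168.1.x target addresses are assumptions — substitute your own, and run it with `sed -i` against pigsty.yml in practice):

```shell
# Demonstrate the IP substitution on sample text (run with `sed -i` on pigsty.yml for real)
echo "10.10.10.11 and 10.10.10.12" \
  | sed -e 's/10\.10\.10\.11/192.168.1.11/g' -e 's/10\.10\.10\.12/192.168.1.12/g'
# → 192.168.1.11 and 192.168.1.12
```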


Content

Source: pigsty/conf/ha/trio.yml

---
#==============================================================#
# File      :   trio.yml
# Desc      :   Pigsty 3-node standard HA template
# Ctime     :   2020-05-23
# Mtime     :   2026-01-20
# Docs      :   https://pigsty.io/docs/conf/trio
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# 3 infra node, 3 etcd node, 3 pgsql node, and 1 minio node
all:  # top level object
  #==============================================================#
  # Clusters, Nodes, and Modules
  #==============================================================#
  children:
    #----------------------------------#
    # infra: monitor, alert, repo, etc..
    #----------------------------------#
    infra: # infra cluster for proxy, monitor, alert, etc
      hosts: # 1 for common usage, 3 nodes for production
        10.10.10.10: { infra_seq: 1 } # identity required
        10.10.10.11: { infra_seq: 2, repo_enabled: false }
        10.10.10.12: { infra_seq: 3, repo_enabled: false }
      vars:
        patroni_watchdog_mode: off # do not fence infra nodes

    etcd: # dcs service for postgres/patroni ha consensus
      hosts: # 1 node for testing, 3 or 5 for production
        10.10.10.10: { etcd_seq: 1 }  # etcd_seq required
        10.10.10.11: { etcd_seq: 2 }  # assign from 1 ~ n
        10.10.10.12: { etcd_seq: 3 }  # odd number please
      vars: # cluster level parameter override roles/etcd
        etcd_cluster: etcd  # mark etcd cluster name etcd
        etcd_safeguard: false # safeguard against purging
        etcd_clean: true # purge etcd during init process

    minio: # minio cluster, s3 compatible object storage
      hosts: { 10.10.10.10: { minio_seq: 1 } }
      vars: { minio_cluster: minio }

    pg-meta:  # 3 instance postgres cluster `pg-meta`
      hosts:  # pg-meta-3 is marked as offline readable replica
        10.10.10.10: { pg_seq: 1, pg_role: primary }
        10.10.10.11: { pg_seq: 2, pg_role: replica }
        10.10.10.12: { pg_seq: 3, pg_role: replica , pg_offline_query: true }
      vars:   # cluster level parameters
        pg_cluster: pg-meta
        pg_users: # https://pigsty.io/docs/pgsql/config/user
          - { name: dbuser_meta , password: DBUser.Meta ,pgbouncer: true   ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view , password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_databases:
          - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector } ] }
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1


  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      minio        : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------#
    # MinIO Related Options
    #----------------------------------#
    node_etc_hosts:
      - '${admin_ip} i.pigsty'        # static dns record that point to repo node
      - '${admin_ip} sss.pigsty'      # static dns record that point to minio
    pgbackrest_method: minio          # if you want to use minio as backup repo instead of 'local' fs, uncomment this
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backup for last 14 days

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Explanation

The ha/trio template is Pigsty’s standard HA configuration, providing true automatic failover capability.

Architecture:

  • Three-node INFRA: Distributed deployment of Prometheus/Grafana/Nginx
  • Three-node ETCD: DCS majority election, tolerates single-point failure
  • Three-node PostgreSQL: One primary, two replicas, automatic failover
  • Single-node MinIO: Can be expanded to multi-node as needed

HA Guarantees:

  • Three-node ETCD tolerates one node failure, maintains majority
  • PostgreSQL primary failure triggers automatic Patroni election for new primary
  • L2 VIP follows primary, applications don’t need to modify connection config
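These guarantees rest on quorum arithmetic: an n-member etcd cluster stays available only while a majority survives, so it tolerates floor((n-1)/2) member failures — a quick check:

```shell
# Majority quorum: an n-member cluster tolerates floor((n-1)/2) member failures
for n in 1 2 3 5; do
  echo "$n-node etcd tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

This is also why two nodes buy nothing over one for the DCS, and why the trio template is the minimum for true HA.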

Use Cases:

  • Minimum HA deployment for production environments
  • Critical business requiring automatic failover
  • Foundation architecture for larger scale deployments

Extension Suggestions:

  • For stronger data security, refer to ha/safe template
  • For more demo features, refer to ha/full template
  • Production environments should enable pgbackrest_method: minio for remote backup

8.24 - ha/dual

Two-node configuration, limited HA deployment tolerating specific server failure

The ha/dual template uses two-node deployment, implementing a “semi-HA” architecture with one primary and one standby. If you only have two servers, this is a pragmatic choice.


Overview

  • Config Name: ha/dual
  • Node Count: Two nodes
  • Description: Two-node limited HA deployment, tolerates specific server failure
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: ha/trio, slim

Usage:

./configure -c ha/dual [-i <primary_ip>]

After configuration, modify placeholder IP 10.10.10.11 to actual standby node IP address.


Content

Source: pigsty/conf/ha/dual.yml

---
#==============================================================#
# File      :   dual.yml
# Desc      :   Pigsty deployment example for two nodes
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://pigsty.io/docs/conf/dual
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


# It is recommended to use at least three nodes in production deployment.
# But sometimes only two nodes are available; that's what dual.yml is for.
#
# In this setup, we have two nodes, .10 (admin_node) and .11 (pgsql_primary):
#
# If .11 is down, .10 will take over since the dcs:etcd is still alive
# If .10 is down, .11 (pgsql primary) will still be functioning as a primary if:
#   - Only dcs:etcd is down
#   - Only pgsql is down
# If both etcd & pgsql are down (e.g. the whole node is down), the primary will demote itself.


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, optional backup repo for pgbackrest
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres cluster 'pg-meta' with single primary instance
    pg-meta:
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: replica }
        10.10.10.11: { pg_seq: 2, pg_role: primary }  # <----- use this as primary by default
      vars:
        pg_cluster: pg-meta
        pg_databases: [ { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [ pigsty ] ,extensions: [ { name: vector }] } ]
        pg_users:
          - { name: dbuser_meta ,password: DBUser.Meta   ,pgbouncer: true ,roles: [ dbrole_admin ]    ,comment: pigsty admin user }
          - { name: dbuser_view ,password: DBUser.Viewer ,pgbouncer: true ,roles: [ dbrole_readonly ] ,comment: read-only viewer for meta database }
        pg_hba_rules:   # https://pigsty.io/docs/pgsql/config/hba
          - { user: all ,db: all ,addr: intra ,auth: pwd ,title: 'everyone intranet access with password' ,order: 800 }
        pg_crontab:     # https://pigsty.io/docs/pgsql/admin/crontab
          - '00 01 * * * /pg/bin/pg-backup full'
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

  vars:                               # global parameters
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]
    infra_portal:                     # domain names and upstream servers
      home   : { domain: i.pigsty }
      #minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_remove: true                 # remove existing repo on admin node during repo bootstrap
    node_repo_remove: true            # remove existing node repo for node managed by pigsty
    repo_extra_packages: [ pg18-main ] #,pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The ha/dual template is Pigsty’s two-node limited HA configuration, designed for scenarios with only two servers.

Architecture:

  • Node A (10.10.10.10): Admin node, runs Infra + etcd + PostgreSQL replica
  • Node B (10.10.10.11): Data node, runs PostgreSQL primary only

Failure Scenario Analysis:

  • Node B down: primary fails over to Node A (automatic)
  • Node A etcd down: primary keeps running without DCS (manual recovery)
  • Node A pgsql down: primary keeps running (manual recovery)
  • Node A complete failure: primary degrades to standalone (manual recovery)

Use Cases:

  • Budget-limited environments with only two servers
  • Situations where manual intervention for some failure scenarios is acceptable
  • Transitional solution before upgrading to three-node HA

Notes:

  • True HA requires at least three nodes (DCS needs majority)
  • Recommend upgrading to three-node architecture as soon as possible
  • L2 VIP requires network environment support (same broadcast domain)

8.25 - App Templates

8.26 - app/odoo

Deploy Odoo open-source ERP system using Pigsty-managed PostgreSQL

The app/odoo configuration template provides a reference configuration for self-hosting Odoo open-source ERP system, using Pigsty-managed PostgreSQL as the database.

For more details, see Odoo Deployment Tutorial


Overview

  • Config Name: app/odoo
  • Node Count: Single node
  • Description: Deploy Odoo ERP using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/odoo [-i <primary_ip>]

Content

Source: pigsty/conf/app/odoo.yml

---
#==============================================================#
# File      :   odoo.yml
# Desc      :   pigsty config for running 1-node odoo app
# Ctime     :   2025-01-11
# Mtime     :   2025-12-12
# Docs      :   https://pigsty.io/docs/app/odoo
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://pigsty.io/docs/app/odoo
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/odoo   # Use this odoo config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install odoo

all:
  children:

    # the odoo application (default username & password: admin/admin)
    odoo:
      hosts: { 10.10.10.10: {} }
      vars:
        app: odoo   # specify app name to be installed (in the apps)
        apps:       # define all applications
          odoo:     # app name, should have corresponding ~/pigsty/app/odoo folder
            file:   # optional directory to be created
              - { path: /data/odoo         ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/webdata ,state: directory, owner: 100, group: 101 }
              - { path: /data/odoo/addons  ,state: directory, owner: 100, group: 101 }
            conf:   # override /opt/<app>/.env config file
              PG_HOST: 10.10.10.10            # postgres host
              PG_PORT: 5432                   # postgres port
              PG_USERNAME: odoo               # postgres user
              PG_PASSWORD: DBUser.Odoo        # postgres password
              ODOO_PORT: 8069                 # odoo app port
              ODOO_DATA: /data/odoo/webdata   # odoo webdata
              ODOO_ADDONS: /data/odoo/addons  # odoo plugins
              ODOO_DBNAME: odoo               # odoo database name
              ODOO_VERSION: 19.0              # odoo image version

    # the odoo database
    pg-odoo:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-odoo
        pg_users:
          - { name: odoo    ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for odoo service }
          - { name: odoo_ro ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for odoo service  }
          - { name: odoo_rw ,password: DBUser.Odoo ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for odoo service }
        pg_databases:
          - { name: odoo ,owner: odoo ,revokeconn: true ,comment: odoo main database  }
        pg_hba_rules:
          - { user: all ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home  : { domain: i.pigsty }
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      odoo:                           # nginx server config for odoo
        domain: odoo.pigsty           # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8069"  # odoo service endpoint: IP:PORT
        websocket: true               # add websocket support
        certbot: odoo.pigsty          # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/odoo template provides a one-click deployment solution for Odoo open-source ERP system.

What is Odoo:

  • World’s most popular open-source ERP system
  • Covers CRM, Sales, Purchasing, Inventory, Finance, HR, and other enterprise management modules
  • Supports thousands of community and official application extensions
  • Provides web interface and mobile support

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Odoo’s built-in database
  • Supports the latest Odoo version (19.0)
  • Data persisted to independent directory /data/odoo
  • Supports custom plugin directory /data/odoo/addons

Access:

# Odoo Web interface
http://odoo.pigsty:8069

# Default admin account
Username: admin
Password: admin (set on first login)

Use Cases:

  • SMB ERP systems
  • Alternative to SAP, Oracle ERP and other commercial solutions
  • Enterprise applications requiring customized business processes

Notes:

  • The Odoo container runs as uid=100/gid=101; the data directories need matching ownership
  • First access requires creating database and setting admin password
  • Production environments should enable HTTPS
  • Custom modules can be installed via /data/odoo/addons

8.27 - app/dify

Deploy Dify AI application development platform using Pigsty-managed PostgreSQL

The app/dify configuration template provides a reference configuration for self-hosting Dify AI application development platform, using Pigsty-managed PostgreSQL and pgvector as vector storage.

For more details, see Dify Deployment Tutorial


Overview

  • Config Name: app/dify
  • Node Count: Single node
  • Description: Deploy Dify using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/dify [-i <primary_ip>]

Content

Source: pigsty/conf/app/dify.yml

---
#==============================================================#
# File      :   dify.yml
# Desc      :   pigsty config for running 1-node dify app
# Ctime     :   2025-02-24
# Mtime     :   2026-01-18
# Docs      :   https://pigsty.io/docs/app/dify
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#
# Last Verified Dify Version: v1.8.1 on 2025-09-08
# tutorial: https://pigsty.io/docs/app/dify
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/dify   # use this dify config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install dify with docker-compose
#
# To replace domain name:
#   sed -i -e 's/dify.pigsty/dify.pigsty.cc/g' pigsty.yml


all:
  children:

    # the dify application
    dify:
      hosts: { 10.10.10.10: {} }
      vars:
        app: dify   # specify app name to be installed (in the apps)
        apps:       # define all applications
          dify:     # app name, should have corresponding ~/pigsty/app/dify folder
            file:   # data directory to be created
              - { path: /data/dify ,state: directory ,mode: 0755 }
            conf:   # override /opt/dify/.env config file

              # change domain, mirror, proxy, secret key
              NGINX_SERVER_NAME: dify.pigsty
              # A secret key for signing and encryption, gen with `openssl rand -base64 42` (CHANGE PASSWORD!)
              SECRET_KEY: sk-somerandomkey
              # expose DIFY nginx service with port 5001 by default
              DIFY_PORT: 5001
              # where to store dify files? the default is ./volume, we'll use another volume created above
              DIFY_DATA: /data/dify

              # proxy and mirror settings
              #PIP_MIRROR_URL: https://pypi.tuna.tsinghua.edu.cn/simple
              #SANDBOX_HTTP_PROXY: http://10.10.10.10:12345
              #SANDBOX_HTTPS_PROXY: http://10.10.10.10:12345

              # database credentials
              DB_USERNAME: dify
              DB_PASSWORD: difyai123456
              DB_HOST: 10.10.10.10
              DB_PORT: 5432
              DB_DATABASE: dify
              VECTOR_STORE: pgvector
              PGVECTOR_HOST: 10.10.10.10
              PGVECTOR_PORT: 5432
              PGVECTOR_USER: dify
              PGVECTOR_PASSWORD: difyai123456
              PGVECTOR_DATABASE: dify
              PGVECTOR_MIN_CONNECTION: 2
              PGVECTOR_MAX_CONNECTION: 10

    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dify ,password: difyai123456 ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: dify superuser }
        pg_databases:
          - { name: dify        ,owner: dify ,comment: dify main database  }
          - { name: dify_plugin ,owner: dify ,comment: dify plugin daemon database }
        pg_hba_rules:
          - { user: dify ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow dify access from local docker network' }
        pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # domain names and upstream servers
      home   :  { domain: i.pigsty }
      #minio :  { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      dify:                            # nginx server config for dify
        domain: dify.pigsty            # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5001"   # dify service endpoint: IP:PORT
        websocket: true                # add websocket support
        certbot: dify.pigsty           # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/dify template provides a one-click deployment solution for the Dify AI application development platform.

What is Dify:

  • Open-source LLM application development platform
  • Supports RAG, Agent, Workflow and other AI application modes
  • Provides visual Prompt orchestration and application building interface
  • Supports multiple LLM backends (OpenAI, Claude, local models, etc.)

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Dify’s built-in database
  • Uses pgvector as vector storage (replaces Weaviate/Qdrant)
  • Supports HTTPS and custom domain names
  • Data persisted to independent directory /data/dify

Access:

# Dify Web interface
http://dify.pigsty:5001

# Or via Nginx proxy
https://dify.pigsty

Use Cases:

  • Enterprise internal AI application development platform
  • RAG knowledge base Q&A systems
  • LLM-driven automated workflows
  • AI Agent development and deployment

Notes:

  • Must change SECRET_KEY; generate one with openssl rand -base64 42
  • Configure LLM API keys (e.g., an OpenAI API Key)
  • The Docker network needs access to PostgreSQL (an HBA rule for 172.17.0.0/16 is pre-configured)
  • Configuring a proxy is recommended to accelerate Python package downloads
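The secret-key note above can be sketched as a quick shell step (nothing Dify-specific here beyond the variable name, which matches the SECRET_KEY entry in Dify's .env file):

```shell
# Generate a random secret suitable for Dify's SECRET_KEY
SECRET_KEY=$(openssl rand -base64 42)
echo "SECRET_KEY length: ${#SECRET_KEY}"   # 42 random bytes base64-encode to 56 characters
```

Put the resulting value into the dify app conf section before running ./app.yml.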

8.28 - app/electric

Deploy Electric real-time sync service using Pigsty-managed PostgreSQL

The app/electric configuration template provides a reference configuration for deploying Electric SQL real-time sync service, enabling real-time data synchronization from PostgreSQL to clients.


Overview

  • Config Name: app/electric
  • Node Count: Single node
  • Description: Deploy Electric real-time sync using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/electric [-i <primary_ip>]

Content

Source: pigsty/conf/app/electric.yml

---
#==============================================================#
# File      :   electric.yml
# Desc      :   pigsty config for running 1-node electric app
# Ctime     :   2025-03-29
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/electric
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/electric
# quick start: https://electric-sql.com/docs/quickstart
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap                 # prepare local repo & ansible
# ./configure -c app/electric # use this electric config template
# vi pigsty.yml               # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml                # install pigsty & pgsql
# ./docker.yml                # install docker & docker-compose
# ./app.yml                   # install electric with docker-compose

all:
  children:
    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:

        app: electric
        apps:       # define all applications
          electric: # app name, should have corresponding ~/pigsty/app/electric folder
            conf:   # override /opt/electric/.env config file : https://electric-sql.com/docs/api/config
              DATABASE_URL: 'postgresql://electric:[email protected]:5432/electric?sslmode=require'
              ELECTRIC_PORT: 8002
              ELECTRIC_PROMETHEUS_PORT: 8003
              ELECTRIC_INSECURE: true
              #ELECTRIC_SECRET: 1U6ItbhoQb4kGUU5wXBLbxvNf

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: electric ,password: DBUser.Electric ,pgbouncer: true , replication: true ,roles: [dbrole_admin] ,comment: electric main user }
        pg_databases: [{ name: electric , owner: electric }]
        pg_hba_rules:
          - { user: electric , db: replication ,addr: infra ,auth: ssl ,title: 'allow electric intranet/docker ssl access' }

  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:

    #----------------------------------#
    # Meta Data
    #----------------------------------#
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]
    infra_portal:                     # domain names and upstream servers
      home : { domain: i.pigsty }
      electric:
        domain: elec.pigsty
        endpoint: "${admin_ip}:8002"
        websocket: true               # add websocket support
        certbot: elec.pigsty          # certbot cert name (REPLACE WITH YOUR OWN DOMAIN!), apply with `make cert`

    #----------------------------------#
    # Safe Guard
    #----------------------------------#
    # you can enable these flags after bootstrap, to prevent purging running etcd / pgsql instances
    etcd_safeguard: false             # prevent purging running etcd instance?
    pg_safeguard: false               # prevent purging running postgres instance? false by default

    #----------------------------------#
    # Repo, Node, Packages
    #----------------------------------#
    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    pg_version: 18                    # default postgres version
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/electric template provides a one-click deployment solution for the Electric SQL real-time sync service.

What is Electric:

  • Real-time data sync service from PostgreSQL to clients
  • Supports local-first application architectures
  • Syncs data changes in real time via logical replication
  • Provides an HTTP API for frontend applications to consume

Key Features:

  • Uses Pigsty-managed PostgreSQL as data source
  • Captures data changes via Logical Replication
  • Supports SSL encrypted connections
  • Built-in Prometheus metrics endpoint

Access:

# Electric API endpoint
http://elec.pigsty:8002

# Prometheus metrics
http://elec.pigsty:8003/metrics

Use Cases:

  • Building Local-first applications
  • Real-time data sync to clients
  • Mobile and PWA data synchronization
  • Real-time updates for collaborative applications

Notes:

  • The electric user needs the replication privilege (granted via replication: true above)
  • PostgreSQL logical replication must be enabled (wal_level = logical)
  • Production environments should use SSL connections (this template sets sslmode=require)
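These prerequisites can be checked against the cluster this template defines. A minimal sketch, assuming the 10.10.10.10 node from this config is up and that Electric serves its Shape API under /v1/shape (per the Electric quickstart; the table name items is a placeholder for one of your own tables):

```shell
# 1. Verify logical replication is enabled on the source cluster
psql "postgresql://electric:[email protected]:5432/electric?sslmode=require" \
     -c "SHOW wal_level;"    # should print: logical

# 2. Fetch an initial shape snapshot over Electric's HTTP API
#    (offset=-1 requests the shape from the beginning)
curl -i "http://10.10.10.10:8002/v1/shape?table=items&offset=-1"
```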

8.29 - app/maybe

Deploy Maybe personal finance management system using Pigsty-managed PostgreSQL

The app/maybe configuration template provides a reference configuration for deploying Maybe open-source personal finance management system, using Pigsty-managed PostgreSQL as the database.


Overview

  • Config Name: app/maybe
  • Node Count: Single node
  • Description: Deploy Maybe finance management using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/maybe [-i <primary_ip>]

Content

Source: pigsty/conf/app/maybe.yml

---
#==============================================================#
# File      :   maybe.yml
# Desc      :   pigsty config for running 1-node maybe app
# Ctime     :   2025-09-08
# Mtime     :   2025-12-12
# Docs      :   https://pigsty.io/docs/app/maybe
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://pigsty.io/docs/app/maybe
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/maybe  # Use this maybe config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install maybe

all:
  children:

    # the maybe application (personal finance management)
    maybe:
      hosts: { 10.10.10.10: {} }
      vars:
        app: maybe   # specify app name to be installed (in the apps)
        apps:        # define all applications
          maybe:     # app name, should have corresponding ~/pigsty/app/maybe folder
            file:    # optional directory to be created
              - { path: /data/maybe             ,state: directory ,mode: 0755 }
              - { path: /data/maybe/storage     ,state: directory ,mode: 0755 }
            conf:    # override /opt/<app>/.env config file
              # Core Configuration
              MAYBE_VERSION: latest                    # Maybe image version
              MAYBE_PORT: 5002                         # Port to expose Maybe service
              MAYBE_DATA: /data/maybe                  # Data directory for Maybe
              APP_DOMAIN: maybe.pigsty                 # Domain name for Maybe
              
              # REQUIRED: Generate with: openssl rand -hex 64
              SECRET_KEY_BASE: sk-somerandomkey        # Secret key for maybe
              
              # Database Configuration
              DB_HOST: 10.10.10.10                    # PostgreSQL host
              DB_PORT: 5432                           # PostgreSQL port
              DB_USERNAME: maybe                      # PostgreSQL username
              DB_PASSWORD: MaybeFinance2025           # PostgreSQL password (CHANGE THIS!)
              DB_DATABASE: maybe_production           # PostgreSQL database name
              
              # Optional: API Integration
              #SYNTH_API_KEY:                         # Get from synthfinance.com

    # the maybe database
    pg-maybe:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-maybe
        pg_users:
          - { name: maybe    ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_admin ] ,createdb: true ,comment: admin user for maybe service }
          - { name: maybe_ro ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_readonly ]  ,comment: read only user for maybe service  }
          - { name: maybe_rw ,password: MaybeFinance2025 ,pgbouncer: true ,roles: [ dbrole_readwrite ] ,comment: read write user for maybe service }
        pg_databases:
          - { name: maybe_production ,owner: maybe ,revokeconn: true ,comment: maybe main database  }
        pg_hba_rules:
          - { user: maybe ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow maybe access from local docker network' }
          - { user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes' }
        pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every 1am

    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    #minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345

    infra_portal:                     # infra services exposed via portal
      home  : { domain: i.pigsty }    # default domain name
      minio : { domain: m.pigsty ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      maybe:                          # nginx server config for maybe
        domain: maybe.pigsty          # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5002"  # maybe service endpoint: IP:PORT
        websocket: true               # add websocket support

    repo_enabled: false
    node_repo_modules: node,infra,pgsql

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root

...

Explanation

The app/maybe template provides a one-click deployment solution for the Maybe open-source personal finance management system.

What is Maybe:

  • Open-source personal and family finance management system
  • Supports multi-account, multi-currency asset tracking
  • Provides investment portfolio analysis and net worth calculation
  • Beautiful modern web interface

Key Features:

  • Uses Pigsty-managed PostgreSQL instead of Maybe’s built-in database
  • Data persisted to independent directory /data/maybe
  • Supports HTTPS and custom domain names
  • Multi-user permission management

Access:

# Maybe Web interface
http://maybe.pigsty:5002

# Or via Nginx proxy
https://maybe.pigsty

Use Cases:

  • Personal or family finance management
  • Investment portfolio tracking and analysis
  • Multi-account asset aggregation
  • Alternative to commercial services like Mint, YNAB

Notes:

  • Must change SECRET_KEY_BASE, generate with openssl rand -hex 64
  • First access requires registering an admin account
  • Optionally configure Synth API for stock price data
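The secret-key note above can be sketched in shell (generic; only the variable name matches Maybe's .env convention):

```shell
# Generate a random secret suitable for SECRET_KEY_BASE
SECRET_KEY_BASE=$(openssl rand -hex 64)
echo "SECRET_KEY_BASE length: ${#SECRET_KEY_BASE}"   # 64 random bytes hex-encode to 128 characters
```

Replace the sk-somerandomkey placeholder in the maybe app conf with this value before running ./app.yml.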

8.30 - app/teable

Deploy Teable open-source Airtable alternative using Pigsty-managed PostgreSQL

The app/teable configuration template provides a reference configuration for deploying Teable open-source no-code database, using Pigsty-managed PostgreSQL as the database.


Overview

  • Config Name: app/teable
  • Node Count: Single node
  • Description: Deploy Teable using Pigsty-managed PostgreSQL
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/teable [-i <primary_ip>]

Content

Source: pigsty/conf/app/teable.yml

---
#==============================================================#
# File      :   teable.yml
# Desc      :   pigsty config for running 1-node teable app
# Ctime     :   2025-02-24
# Mtime     :   2025-12-12
# Docs      :   https://pigsty.io/docs/app/teable
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://pigsty.io/docs/app/teable
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./bootstrap               # prepare local repo & ansible
# ./configure -c app/teable # use this teable config template
# vi pigsty.yml             # IMPORTANT: CHANGE CREDENTIALS!!
# ./deploy.yml              # install pigsty & pgsql & minio
# ./docker.yml              # install docker & docker-compose
# ./app.yml                 # install teable with docker-compose
#
# To replace domain name:
#   sed -ie 's/teable.pigsty/teable.pigsty.cc/g' pigsty.yml

all:
  children:

    # the teable application
    teable:
      hosts: { 10.10.10.10: {} }
      vars:
        app: teable   # specify app name to be installed (in the apps)
        apps:         # define all applications
          teable:     # app name, ~/pigsty/app/teable folder
            conf:     # override /opt/teable/.env config file
              # https://github.com/teableio/teable/blob/develop/dockers/examples/standalone/.env
              # https://help.teable.io/en/deploy/env
              POSTGRES_HOST: "10.10.10.10"
              POSTGRES_PORT: "5432"
              POSTGRES_DB: "teable"
              POSTGRES_USER: "dbuser_teable"
              POSTGRES_PASSWORD: "DBUser.Teable"
              PRISMA_DATABASE_URL: "postgresql://dbuser_teable:[email protected]:5432/teable"
              PUBLIC_ORIGIN: "http://tea.pigsty"
              PUBLIC_DATABASE_PROXY: "10.10.10.10:5432"
              TIMEZONE: "UTC"

              # Need to support sending emails to enable the following configurations
              #BACKEND_MAIL_HOST: smtp.teable.io
              #BACKEND_MAIL_PORT: 465
              #BACKEND_MAIL_SECURE: true
              #BACKEND_MAIL_SENDER: noreply.teable.io
              #BACKEND_MAIL_SENDER_NAME: Teable
              #BACKEND_MAIL_AUTH_USER: username
              #BACKEND_MAIL_AUTH_PASS: password


    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - { name: dbuser_teable ,password: DBUser.Teable ,pgbouncer: true ,roles: [ dbrole_admin ] ,superuser: true ,comment: teable superuser }
        pg_databases:
          - { name: teable ,owner: dbuser_teable ,comment: teable database }
        pg_hba_rules:
          - { user: dbuser_teable ,db: all ,addr: 172.17.0.0/16  ,auth: pwd ,title: 'allow teable access from local docker network' }
        pg_crontab: [ '00 01 * * * /pg/bin/pg-backup full' ] # make a full backup every 1am
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    node_tune: oltp                   # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml                 # pgsql tuning specs: {oltp,olap,tiny,crit}.yml

    docker_enabled: true              # enable docker on app group
    #docker_registry_mirrors: ["https://docker.1panel.live","https://docker.1ms.run","https://docker.xuanyuan.me","https://registry-1.docker.io"]

    proxy_env:                        # global proxy env when downloading packages & pull docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pull images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is format as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345
    infra_portal:                        # domain names and upstream servers
      home   : { domain: i.pigsty }
      #minio : { domain: m.pigsty    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }

      teable:                            # nginx server config for teable
        domain: tea.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8890"     # teable service endpoint: IP:PORT
        websocket: true                  # add websocket support
        certbot: tea.pigsty              # certbot cert name, apply with `make cert`

    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    node_etc_hosts: [ '${admin_ip} i.pigsty sss.pigsty' ]
    pg_version: 18

    #----------------------------------------------#
    # PASSWORD : https://pigsty.io/docs/setup/security/
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/teable template provides a one-click deployment solution for the Teable open-source no-code database.

What is Teable:

  • Open-source Airtable alternative
  • No-code database built on PostgreSQL
  • Supports table, kanban, calendar, form, and other views
  • Provides API and automation workflows

Key Features:

  • Uses Pigsty-managed PostgreSQL as underlying storage
  • Data is stored in real PostgreSQL tables
  • Supports direct SQL queries
  • Can integrate with other PostgreSQL tools and extensions

Access:

# Teable Web interface
http://tea.pigsty:8890

# Or via Nginx proxy
https://tea.pigsty

# Direct SQL access to underlying data
psql postgresql://dbuser_teable:[email protected]:5432/teable

Use Cases:

  • Need Airtable-like functionality but want to self-host
  • Team collaboration data management
  • Need both API and SQL access
  • Want data stored in real PostgreSQL

Notes:

  • The Teable user requires superuser privileges (as defined in pg_users above)
  • PUBLIC_ORIGIN must be set to the externally accessible address
  • Email notifications are supported via optional SMTP configuration
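Since Teable stores data in real PostgreSQL tables, the underlying schema can be inspected directly once the stack is up. A sketch against this template's cluster (actual schema and table names depend on the bases you create in Teable):

```shell
# Connect with the superuser defined in this template and list Teable's schemas
psql "postgresql://dbuser_teable:[email protected]:5432/teable" <<'SQL'
SELECT schema_name FROM information_schema.schemata
 WHERE schema_name NOT IN ('pg_catalog', 'information_schema');
SQL
```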

8.31 - app/registry

Deploy Docker Registry image proxy and private registry using Pigsty

The app/registry configuration template provides a reference configuration for deploying Docker Registry as an image proxy, usable as Docker Hub mirror acceleration or private image registry.


Overview

  • Config Name: app/registry
  • Node Count: Single node
  • Description: Deploy Docker Registry image proxy and private registry
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c app/registry [-i <primary_ip>]

Content

Source: pigsty/conf/app/registry.yml

---
#==============================================================#
# File      :   registry.yml
# Desc      :   pigsty config for running Docker Registry Mirror
# Ctime     :   2025-07-01
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/app/registry
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# tutorial: https://doc.pgsty.com/app/registry
# how to use this template:
#
#  curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
# ./configure -c app/registry   # use this registry config template
# vi pigsty.yml                 # IMPORTANT: CHANGE DOMAIN & CREDENTIALS!
# ./deploy.yml                  # install pigsty
# ./docker.yml                  # install docker & docker-compose
# ./app.yml                     # install registry with docker-compose
#
# To replace domain name:
#   sed -ie 's/registry.pigsty/registry.your-domain.com/g' pigsty.yml

#==============================================================#
# Usage Instructions:
#==============================================================#
#
# 1. Deploy the registry:
#    ./configure -c conf/app/registry.yml && ./deploy.yml && ./docker.yml && ./app.yml
#
# 2. Configure Docker clients to use the mirror:
#    Edit /etc/docker/daemon.json:
#    {
#      "registry-mirrors": ["https://registry.your-domain.com"],
#      "insecure-registries": ["registry.your-domain.com"]
#    }
#
# 3. Restart Docker daemon:
#    sudo systemctl restart docker
#
# 4. Test the registry:
#    docker pull nginx:latest  # This will now use your mirror
#
# 5. Access the web UI (optional):
#    https://registry-ui.your-domain.com
#
# 6. Monitor the registry:
#    curl https://registry.your-domain.com/v2/_catalog
#    curl https://registry.your-domain.com/v2/nginx/tags/list
#
#==============================================================#


all:
  children:

    # the docker registry mirror application
    registry:
      hosts: { 10.10.10.10: {} }
      vars:
        app: registry                    # specify app name to be installed
        apps:                            # define all applications
          registry:
            file:                        # create data directory for registry
              - { path: /data/registry ,state: directory ,mode: 0755 }
            conf:                        # environment variables for registry
              REGISTRY_DATA: /data/registry
              REGISTRY_PORT: 5000
              REGISTRY_UI_PORT: 5080
              REGISTRY_STORAGE_DELETE_ENABLED: true
              REGISTRY_LOG_LEVEL: info
              REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
              REGISTRY_PROXY_TTL: 168h

    # basic infrastructure
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }
    etcd:  { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

  vars:
    #----------------------------------------------#
    # INFRA : https://doc.pgsty.com/infra/param
    #----------------------------------------------#
    version: v4.0.0                      # pigsty version string
    admin_ip: 10.10.10.10                # admin node ip address
    region: default                      # upstream mirror region: default,china,europe
    infra_portal:                        # infra services exposed via portal
      home : { domain: i.pigsty }        # default domain name

      # Docker Registry Mirror service configuration
      registry:                          # nginx server config for registry
        domain: d.pigsty                 # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5000"     # registry service endpoint: IP:PORT
        websocket: false                 # registry doesn't need websocket
        certbot: d.pigsty                # certbot cert name, apply with `make cert`

      # Optional: Registry Web UI
      registry-ui:                       # nginx server config for registry UI
        domain: dui.pigsty               # REPLACE IT WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:5080"     # registry UI endpoint: IP:PORT
        websocket: false                 # UI doesn't need websocket
        certbot: d.pigsty                # certbot cert name for UI

    #----------------------------------------------#
    # NODE : https://doc.pgsty.com/node/param
    #----------------------------------------------#
    repo_enabled: false
    node_repo_modules: node,infra,pgsql
    node_tune: oltp                     # node tuning specs: oltp,olap,tiny,crit

    #----------------------------------------------#
    # PGSQL : https://doc.pgsty.com/pgsql/param
    #----------------------------------------------#
    pg_version: 18                      # Default PostgreSQL Major Version is 18
    pg_conf: oltp.yml                   # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    pg_packages: [ pgsql-main, pgsql-common ]   # pg kernel and common utils
    #pg_extensions: [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The app/registry template provides a one-click deployment solution for a Docker Registry image proxy.

What is Registry:

  • Docker’s official image registry implementation
  • Can serve as Docker Hub pull-through cache
  • Can also serve as private image registry
  • Supports image caching and local storage

Key Features:

  • Acts as proxy cache for Docker Hub to accelerate image pulls
  • Caches images to local storage /data/registry
  • Provides Web UI to view cached images
  • Supports custom cache expiration time

Configure Docker Client:

# Edit /etc/docker/daemon.json
{
  "registry-mirrors": ["https://d.pigsty"],
  "insecure-registries": ["d.pigsty"]
}

# Restart Docker
sudo systemctl restart docker
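Before restarting Docker, it can help to sanity-check the JSON you are about to write to /etc/docker/daemon.json (a minimal sketch; assumes python3 is available and uses the d.pigsty domain from this template):

```shell
# Validate the daemon.json fragment; json.tool exits non-zero on malformed JSON
python3 -m json.tool <<'EOF'
{
  "registry-mirrors": ["https://d.pigsty"],
  "insecure-registries": ["d.pigsty"]
}
EOF
```

If this prints the pretty-formatted JSON, the file is safe to install and Docker can be restarted.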

Access:

# Registry API
https://d.pigsty/v2/_catalog

# Web UI
http://dui.pigsty:5080

# Pull images (automatically uses proxy)
docker pull nginx:latest

Use Cases:

  • Accelerate Docker image pulls (especially in mainland China)
  • Reduce external network dependency
  • Enterprise internal private image registry
  • Offline environment image distribution

Notes:

  • Requires sufficient disk space to store cached images
  • The default cache TTL is 7 days (REGISTRY_PROXY_TTL: 168h)
  • HTTPS certificates can be provisioned via certbot

8.32 - Misc Templates

8.33 - demo/el

Configuration template optimized for Enterprise Linux (RHEL/Rocky/Alma)

The demo/el configuration template is optimized for Enterprise Linux family distributions (RHEL, Rocky Linux, Alma Linux, Oracle Linux).


Overview

  • Config Name: demo/el
  • Node Count: Single node
  • Description: Enterprise Linux optimized configuration template
  • OS Distro: el8, el9, el10
  • OS Arch: x86_64, aarch64
  • Related: meta, demo/debian

Usage:

./configure -c demo/el [-i <primary_ip>]

Content

Source: pigsty/conf/demo/el.yml

---
#==============================================================#
# File      :   el.yml
# Desc      :   Default parameters for EL System in Pigsty
# Ctime     :   2020-05-22
# Mtime     :   2026-01-14
# Docs      :   https://pigsty.io/docs/conf/el
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#==============================================================#
#                        Sandbox (4-node)                      #
#==============================================================#
# admin user : vagrant  (nopass ssh & sudo already set)        #
# 1.  meta    :    10.10.10.10     (2 Core | 4GB)    pg-meta   #
# 2.  node-1  :    10.10.10.11     (1 Core | 1GB)    pg-test-1 #
# 3.  node-2  :    10.10.10.12     (1 Core | 1GB)    pg-test-2 #
# 4.  node-3  :    10.10.10.13     (1 Core | 1GB)    pg-test-3 #
# (replace these IPs if your 4-node env has different addrs)   #
# VIP x 2: (L2 VIP is available inside the same LAN)           #
#     pg-meta --->  10.10.10.2 ---> 10.10.10.10                #
#     pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}          #
#==============================================================#


all:

  ##################################################################
  #                            CLUSTERS                            #
  ##################################################################
  # infra, etcd, minio, pgsql, and redis clusters are defined as
  # k:v pairs inside `all.children`, where the key is the cluster
  # name and the value is a cluster definition with two parts:
  # `hosts`: cluster members ip and instance level variables
  # `vars` : cluster level variables
  ##################################################################
  children:                                 # groups definition

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql cluster: pg-meta (CMDB)    #
    #----------------------------------#
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
      vars:
        pg_cluster: pg-meta

        # define business databases here: https://pigsty.io/docs/pgsql/config/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          #- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          #- { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          #- { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          #- { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

        # define business users here: https://pigsty.io/docs/pgsql/config/user
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            #login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create database? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
          #- {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
          #- {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
          #- {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }

        # define business service here: https://pigsty.io/docs/pgsql/service
        pg_services:                        # extra services in addition to pg_default_services, array of service definition
          # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
          - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
            port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
            ip: "*"                         # optional, service bind ip address, `*` for all ip by default
            selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
            dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
            check: /sync                    # optional, health check url path, / by default
            backup: "[? pg_role == `primary`]"  # backup server selector
            maxconn: 3000                   # optional, max allowed front-end connection
            balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
            #options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

        # define pg extensions: https://pigsty.io/docs/pgsql/ext/
        pg_libs: 'pg_stat_statements, auto_explain' # shared preload libraries (append timescaledb here if needed)
        #pg_extensions: [] # extensions to be installed on this cluster

        # define HBA rules here: https://pigsty.io/docs/pgsql/config/hba
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

        pg_crontab:  # make a full backup at 1 AM every day
          - '00 01 * * * /pg/bin/pg-backup full'

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }] # create a database and user named 'test'
        node_tune: tiny
        pg_conf: tiny.yml
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_crontab:  # full backup at 1 AM on Monday, incremental backups on the other days
          - '00 01 * * 1 /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  ####################################################################
  #                             VARS                                 #
  ####################################################################
  vars:                               # global variables


    #================================================================#
    #                         VARS: INFRA                            #
    #================================================================#

    #-----------------------------------------------------------------
    # META
    #-----------------------------------------------------------------
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    language: en                      # default language: en, zh
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    #-----------------------------------------------------------------
    # CA
    #-----------------------------------------------------------------
    ca_create: true                   # create ca if not exists? or just abort
    ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
    cert_validity: 7300d              # cert validity, 20 years by default

    #-----------------------------------------------------------------
    # INFRA_IDENTITY
    #-----------------------------------------------------------------
    #infra_seq: 1                     # infra node identity, explicitly required
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
    infra_data: /data/infra           # default data path for infrastructure data

    #-----------------------------------------------------------------
    # REPO
    #-----------------------------------------------------------------
    repo_enabled: true                # create a yum repo on this infra node?
    repo_home: /www                   # repo home dir, `/www` by default
    repo_name: pigsty                 # repo name, pigsty by default
    repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
    repo_remove: true                 # remove existing upstream repo
    repo_modules: infra,node,pgsql    # which repo modules are installed in repo_upstream
    repo_upstream:                    # where to download
      - { name: pigsty-local   ,description: 'Pigsty Local'       ,module: local   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty'  }} # used by intranet nodes
      - { name: pigsty-infra   ,description: 'Pigsty INFRA'       ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/infra/$basearch' ,china: 'https://repo.pigsty.cc/yum/infra/$basearch' }}
      - { name: pigsty-pgsql   ,description: 'Pigsty PGSQL'       ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/pgsql/el$releasever.$basearch' ,china: 'https://repo.pigsty.cc/yum/pgsql/el$releasever.$basearch' }}
      - { name: nginx          ,description: 'Nginx Repo'         ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://nginx.org/packages/rhel/$releasever/$basearch/' }}
      - { name: docker-ce      ,description: 'Docker CE'          ,module: infra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable'    ,china: 'https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable' ,europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
      - { name: baseos         ,description: 'EL 8+ BaseOS'       ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/BaseOS/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/BaseOS/$basearch/os/'         ,europe: 'https://mirrors.xtom.de/rocky/$releasever/BaseOS/$basearch/os/'     }}
      - { name: appstream      ,description: 'EL 8+ AppStream'    ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/AppStream/$basearch/os/'  ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/AppStream/$basearch/os/'      ,europe: 'https://mirrors.xtom.de/rocky/$releasever/AppStream/$basearch/os/'  }}
      - { name: extras         ,description: 'EL 8+ Extras'       ,module: node    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/extras/$basearch/os/'     ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/extras/$basearch/os/'         ,europe: 'https://mirrors.xtom.de/rocky/$releasever/extras/$basearch/os/'     }}
      - { name: powertools     ,description: 'EL 8 PowerTools'    ,module: node    ,releases: [8     ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/PowerTools/$basearch/os/' ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/PowerTools/$basearch/os/'     ,europe: 'https://mirrors.xtom.de/rocky/$releasever/PowerTools/$basearch/os/' }}
      - { name: crb            ,description: 'EL 9 CRB'           ,module: node    ,releases: [  9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://dl.rockylinux.org/pub/rocky/$releasever/CRB/$basearch/os/'        ,china: 'https://mirrors.aliyun.com/rockylinux/$releasever/CRB/$basearch/os/'            ,europe: 'https://mirrors.xtom.de/rocky/$releasever/CRB/$basearch/os/'        }}
      - { name: epel           ,description: 'EL 8+ EPEL'         ,module: node    ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://mirrors.edge.kernel.org/fedora-epel/$releasever/Everything/$basearch/' ,china: 'https://mirrors.aliyun.com/epel/$releasever/Everything/$basearch/'         ,europe: 'https://mirrors.xtom.de/epel/$releasever/Everything/$basearch/'     }}
      - { name: epel           ,description: 'EL 10 EPEL'         ,module: node    ,releases: [    10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://mirrors.edge.kernel.org/fedora-epel/$releasever.0/Everything/$basearch/' ,china: 'https://mirrors.aliyun.com/epel/$releasever.0/Everything/$basearch/'     ,europe: 'https://mirrors.xtom.de/epel/$releasever.0/Everything/$basearch/'   }}
      - { name: pgdg-common    ,description: 'PostgreSQL Common'  ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg-el8fix    ,description: 'PostgreSQL EL8FIX'  ,module: pgsql   ,releases: [8     ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-centos8-sysupdates/redhat/rhel-8-$basearch/'  }}
      - { name: pgdg-el9fix    ,description: 'PostgreSQL EL9FIX'  ,module: pgsql   ,releases: [  9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky9-sysupdates/redhat/rhel-9-$basearch/'   }}
      - { name: pgdg-el10fix   ,description: 'PostgreSQL EL10FIX' ,module: pgsql   ,releases: [    10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/common/pgdg-rocky10-sysupdates/redhat/rhel-10-$basearch/' }}
      - { name: pgdg13         ,description: 'PostgreSQL 13'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/13/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/13/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg14         ,description: 'PostgreSQL 14'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/14/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/14/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg15         ,description: 'PostgreSQL 15'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/15/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/15/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg16         ,description: 'PostgreSQL 16'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/16/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/16/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg17         ,description: 'PostgreSQL 17'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/17/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/17/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg18         ,description: 'PostgreSQL 18'      ,module: pgsql   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/18/redhat/rhel-$releasever-$basearch'          ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/18/redhat/rhel-$releasever-$basearch'          ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/18/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg-beta      ,description: 'PostgreSQL Testing' ,module: beta    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/testing/19/redhat/rhel-$releasever-$basearch'  }}
      - { name: pgdg-extras    ,description: 'PostgreSQL Extra'   ,module: extra   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/extras/redhat/rhel-$releasever-$basearch'      ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/extras/redhat/rhel-$releasever-$basearch'      ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/extras/redhat/rhel-$releasever-$basearch'      }}
      - { name: pgdg13-nonfree ,description: 'PostgreSQL 13+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/13/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg14-nonfree ,description: 'PostgreSQL 14+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/14/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg15-nonfree ,description: 'PostgreSQL 15+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/15/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg16-nonfree ,description: 'PostgreSQL 16+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/16/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg17-nonfree ,description: 'PostgreSQL 17+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/17/redhat/rhel-$releasever-$basearch' }}
      - { name: pgdg18-nonfree ,description: 'PostgreSQL 18+'     ,module: extra   ,releases: [8,9,10] ,arch: [x86_64         ] ,baseurl: { default: 'https://download.postgresql.org/pub/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' ,china: 'https://mirrors.aliyun.com/postgresql/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' ,europe: 'https://mirrors.xtom.de/postgresql/repos/yum/non-free/18/redhat/rhel-$releasever-$basearch' }}
      - { name: timescaledb    ,description: 'TimescaleDB'        ,module: extra   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/el/$releasever/$basearch'  }}
      - { name: percona        ,description: 'Percona TDE'        ,module: percona ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/percona/el$releasever.$basearch' ,china: 'https://repo.pigsty.cc/yum/percona/el$releasever.$basearch' ,origin: 'http://repo.percona.com/ppg-18.1/yum/release/$releasever/RPMS/$basearch'  }}
      - { name: wiltondb       ,description: 'WiltonDB'           ,module: mssql   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/yum/mssql/el$releasever.$basearch', china: 'https://repo.pigsty.cc/yum/mssql/el$releasever.$basearch' , origin: 'https://download.copr.fedorainfracloud.org/results/wiltondb/wiltondb/epel-$releasever-$basearch/' }}
      - { name: groonga        ,description: 'Groonga'            ,module: groonga ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/almalinux/$releasever/$basearch/' }}
      - { name: mysql          ,description: 'MySQL'              ,module: mysql   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/yum/mysql-8.4-community/el/$releasever/$basearch/' }}
      - { name: mongo          ,description: 'MongoDB'            ,module: mongo   ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/8.0/$basearch/' ,china: 'https://mirrors.aliyun.com/mongodb/yum/redhat/$releasever/mongodb-org/8.0/$basearch/' }}
      - { name: redis          ,description: 'Redis'              ,module: redis   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpmfind.net/linux/remi/enterprise/$releasever/redis72/$basearch/' }}
      - { name: grafana        ,description: 'Grafana'            ,module: grafana ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://rpm.grafana.com', china: 'https://mirrors.aliyun.com/grafana/yum/' }}
      - { name: kubernetes     ,description: 'Kubernetes'         ,module: kube    ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/rpm/', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/rpm/' }}
      - { name: gitlab-ee      ,description: 'Gitlab EE'          ,module: gitlab  ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/el/$releasever/$basearch' }}
      - { name: gitlab-ce      ,description: 'Gitlab CE'          ,module: gitlab  ,releases: [8,9   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/el/$releasever/$basearch' }}
      - { name: clickhouse     ,description: 'ClickHouse'         ,module: click   ,releases: [8,9,10] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/rpm/stable/', china: 'https://mirrors.aliyun.com/clickhouse/rpm/stable/' }}

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [ pgsql-main ]
    repo_url_packages: []

    #-----------------------------------------------------------------
    # INFRA_PACKAGE
    #-----------------------------------------------------------------
    infra_packages:                   # packages to be installed on infra nodes
      - grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
      - node_exporter,blackbox_exporter,nginx_exporter,pg_exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx

    #-----------------------------------------------------------------
    # NGINX
    #-----------------------------------------------------------------
    nginx_enabled: true               # enable nginx on this infra node?
    nginx_clean: false                # clean existing nginx config during init?
    nginx_exporter_enabled: true      # enable nginx_exporter on this infra node?
    nginx_exporter_port: 9113         # nginx_exporter listen port, 9113 by default
    nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
    nginx_cert_validity: 397d         # nginx self-signed cert validity, 397d by default
    nginx_home: /www                  # nginx content dir, `/www` by default (soft link to nginx_data)
    nginx_data: /data/nginx           # nginx actual data dir, /data/nginx by default
    nginx_users: { admin : pigsty }   # nginx basic auth users: name and pass dict
    nginx_port: 80                    # nginx listen port, 80 by default
    nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
    certbot_sign: false               # sign nginx cert with certbot during setup?
    certbot_email: [email protected]     # certbot email address, used for free ssl
    certbot_options: ''               # certbot extra options

    #-----------------------------------------------------------------
    # DNS
    #-----------------------------------------------------------------
    dns_enabled: true                 # setup dnsmasq on this infra node?
    dns_port: 53                      # dns server listen port, 53 by default
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - "${admin_ip} i.pigsty"
      - "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

    #-----------------------------------------------------------------
    # VICTORIA
    #-----------------------------------------------------------------
    vmetrics_enabled: true            # enable victoria-metrics on this infra node?
    vmetrics_clean: false             # clean existing victoria-metrics data during init?
    vmetrics_port: 8428               # victoria-metrics listen port, 8428 by default
    vmetrics_scrape_interval: 10s     # victoria global scrape interval, 10s by default
    vmetrics_scrape_timeout: 8s       # victoria global scrape timeout, 8s by default
    vmetrics_options: >-
      -retentionPeriod=15d
      -promscrape.fileSDCheckInterval=5s
    vlogs_enabled: true               # enable victoria-logs on this infra node?
    vlogs_clean: false                # clean victoria-logs data during init?
    vlogs_port: 9428                  # victoria-logs listen port, 9428 by default
    vlogs_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
      -insert.maxLineSizeBytes=1MB
      -search.maxQueryDuration=120s
    vtraces_enabled: true             # enable victoria-traces on this infra node?
    vtraces_clean: false              # clean victoria-traces data during init?
    vtraces_port: 10428               # victoria-traces listen port, 10428 by default
    vtraces_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
    vmalert_enabled: true             # enable vmalert on this infra node?
    vmalert_port: 8880                # vmalert listen port, 8880 by default
    vmalert_options: ''               # vmalert extra server options

    #-----------------------------------------------------------------
    # PROMETHEUS
    #-----------------------------------------------------------------
    blackbox_enabled: true            # setup blackbox_exporter on this infra node?
    blackbox_port: 9115               # blackbox_exporter listen port, 9115 by default
    blackbox_options: ''              # blackbox_exporter extra server options
    alertmanager_enabled: true        # setup alertmanager on this infra node?
    alertmanager_port: 9059           # alertmanager listen port, 9059 by default
    alertmanager_options: ''          # alertmanager extra server options
    exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default

    #-----------------------------------------------------------------
    # GRAFANA
    #-----------------------------------------------------------------
    grafana_enabled: true             # enable grafana on this infra node?
    grafana_port: 3000                # default listen port for grafana
    grafana_clean: false              # clean grafana data during init?
    grafana_admin_username: admin     # grafana admin username, `admin` by default
    grafana_admin_password: pigsty    # grafana admin password, `pigsty` by default
    grafana_auth_proxy: false         # enable grafana auth proxy?
    grafana_pgurl: ''                 # external postgres database url for grafana if given
    grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource


    #================================================================#
    #                         VARS: NODE                             #
    #================================================================#

    #-----------------------------------------------------------------
    # NODE_IDENTITY
    #-----------------------------------------------------------------
    #nodename:           # [INSTANCE] # node instance identity, use hostname if missing, optional
    node_cluster: nodes   # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
    nodename_overwrite: true          # overwrite node's hostname with nodename?
    nodename_exchange: false          # exchange nodename among play hosts?
    node_id_from_pg: true             # use postgres identity as node identity if applicable?

    #-----------------------------------------------------------------
    # NODE_DNS
    #-----------------------------------------------------------------
    node_write_etc_hosts: true        # modify `/etc/hosts` on target node?
    node_default_etc_hosts:           # static dns records in `/etc/hosts`
      - "${admin_ip} i.pigsty"
    node_etc_hosts: []                # extra static dns records in `/etc/hosts`
    node_dns_method: add              # how to handle dns servers: add,none,overwrite
    node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
    node_dns_options:                 # dns resolv options in `/etc/resolv.conf`
      - options single-request-reopen timeout:1

    #-----------------------------------------------------------------
    # NODE_PACKAGE
    #-----------------------------------------------------------------
    node_repo_modules: local          # upstream repo to be added on node, local by default
    node_repo_remove: true            # remove existing repo on node?
    node_packages: [openssh-server]   # packages to be installed on current nodes with the latest version
    node_default_packages:            # default packages to be installed on all nodes
      - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
      - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
      - zlib,yum,audit,bind-utils,readline,vim-minimal,node_exporter,grubby,openssh-server,openssh-clients,chkconfig,vector
    node_uv_env: /data/venv           # uv venv path, empty string to skip
    node_pip_packages: ''             # pip packages to install in uv venv

    #-----------------------------------------------------------------
    # NODE_SEC
    #-----------------------------------------------------------------
    node_selinux_mode: permissive     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: none (skip), off (disable), zone (enable & config)
    node_firewall_intranet:           # which intranet cidr considered as internal network
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
    node_firewall_public_port:        # expose these ports to public network in (zone, strict) mode
      - 22                            # enable ssh access
      - 80                            # enable http access
      - 443                           # enable https access
      - 5432                          # enable postgresql access (think twice before exposing it!)

    #-----------------------------------------------------------------
    # NODE_TUNE
    #-----------------------------------------------------------------
    node_disable_numa: false          # disable node numa, reboot required
    node_disable_swap: false          # disable node swap, use with caution
    node_static_network: true         # preserve dns resolver settings after reboot
    node_disk_prefetch: false         # setup disk prefetch on HDD to increase performance
    node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
    node_hugepage_count: 0            # number of 2MB hugepage, take precedence over ratio
    node_hugepage_ratio: 0            # node mem hugepage ratio, 0 disables it by default
    node_overcommit_ratio: 0          # node mem overcommit ratio, 0 disables it by default
    node_tune: oltp                   # node tuned profile: none,oltp,olap,crit,tiny
    node_sysctl_params: { }           # sysctl parameters in k:v format in addition to tuned

    #-----------------------------------------------------------------
    # NODE_ADMIN
    #-----------------------------------------------------------------
    node_data: /data                  # node main data directory, `/data` by default
    node_admin_enabled: true          # create an admin user on target node?
    node_admin_uid: 88                # uid and gid for node admin user
    node_admin_username: dba          # name of node admin user, `dba` by default
    node_admin_sudo: nopass           # admin sudo privilege, all,nopass. nopass by default
    node_admin_ssh_exchange: true     # exchange admin ssh key among node cluster
    node_admin_pk_current: true       # add current user's ssh pk to admin authorized_keys
    node_admin_pk_list: []            # ssh public keys to be added to admin user
    node_aliases: {}                  # extra shell aliases to be added, k:v dict

    #-----------------------------------------------------------------
    # NODE_TIME
    #-----------------------------------------------------------------
    node_timezone: ''                 # setup node timezone, empty string to skip
    node_ntp_enabled: true            # enable chronyd time sync service?
    node_ntp_servers:                 # ntp servers in `/etc/chrony.conf`
      - pool pool.ntp.org iburst
    node_crontab_overwrite: true      # overwrite or append to `/etc/crontab`?
    node_crontab: [ ]                 # crontab entries in `/etc/crontab`
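    # Example (hypothetical schedule): run a full backup at 1 AM every day,
    # assuming the pg-backup script from pgsql-common is installed at /pg/bin:
    # node_crontab:
    #   - '00 01 * * * postgres /pg/bin/pg-backup full'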

    #-----------------------------------------------------------------
    # NODE_VIP
    #-----------------------------------------------------------------
    vip_enabled: false                # enable vip on this node cluster?
    # vip_address:         [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
    # vip_vrid:            [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
    vip_role: backup                  # optional, `master|backup`, backup by default, used as initial role
    vip_preempt: false                # optional, `true/false`, false by default, enable vip preemption
    vip_interface: eth0               # node vip network interface to listen, `eth0` by default
    vip_dns_suffix: ''                # node vip dns name suffix, empty string by default
    vip_exporter_port: 9650           # keepalived exporter listen port, 9650 by default

    #-----------------------------------------------------------------
    # HAPROXY
    #-----------------------------------------------------------------
    haproxy_enabled: true             # enable haproxy on this node?
    haproxy_clean: false              # cleanup all existing haproxy config?
    haproxy_reload: true              # reload haproxy after config?
    haproxy_auth_enabled: true        # enable authentication for haproxy admin page
    haproxy_admin_username: admin     # haproxy admin username, `admin` by default
    haproxy_admin_password: pigsty    # haproxy admin password, `pigsty` by default
    haproxy_exporter_port: 9101       # haproxy admin/exporter port, 9101 by default
    haproxy_client_timeout: 24h       # client side connection timeout, 24h by default
    haproxy_server_timeout: 24h       # server side connection timeout, 24h by default
    haproxy_services: []              # list of haproxy service to be exposed on node
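    # Example (a minimal sketch with illustrative names, IPs, and ports,
    # assuming the name/port/servers fields): expose a service via local haproxy:
    # haproxy_services:
    #   - name: example-svc             # service name
    #     port: 5433                    # haproxy listen port
    #     servers:                      # backend server list
    #       - { name: svc-1 ,ip: 10.10.10.11 ,port: 5432 }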

    #-----------------------------------------------------------------
    # NODE_EXPORTER
    #-----------------------------------------------------------------
    node_exporter_enabled: true       # setup node_exporter on this node?
    node_exporter_port: 9100          # node exporter listen port, 9100 by default
    node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'

    #-----------------------------------------------------------------
    # VECTOR
    #-----------------------------------------------------------------
    vector_enabled: true              # enable vector log collector?
    vector_clean: false               # purge vector data dir during init?
    vector_data: /data/vector         # vector data dir, /data/vector by default
    vector_port: 9598                 # vector metrics port, 9598 by default
    vector_read_from: beginning       # vector read from beginning or end
    vector_log_endpoint: [ infra ]    # if defined, send vector logs to this endpoint


    #================================================================#
    #                        VARS: DOCKER                            #
    #================================================================#
    docker_enabled: false             # enable docker on this node?
    docker_data: /data/docker         # docker data directory, /data/docker by default
    docker_storage_driver: overlay2   # docker storage driver, overlay2 by default; zfs, btrfs also possible
    docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
    docker_registry_mirrors: []       # docker registry mirror list
    docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
    docker_image: []                  # docker image to be pulled after bootstrap
    docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern

    #================================================================#
    #                         VARS: ETCD                             #
    #================================================================#
    #etcd_seq: 1                      # etcd instance identifier, explicitly required
    etcd_cluster: etcd                # etcd cluster & group name, etcd by default
    etcd_safeguard: false             # prevent purging running etcd instance?
    etcd_clean: true                  # purge existing etcd during initialization?
    etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
    etcd_port: 2379                   # etcd client port, 2379 by default
    etcd_peer_port: 2380              # etcd peer port, 2380 by default
    etcd_init: new                    # etcd initial cluster state, new or existing
    etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
    etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default
    etcd_root_password: Etcd.Root     # etcd root password for RBAC, change it!


    #================================================================#
    #                         VARS: MINIO                            #
    #================================================================#
    #minio_seq: 1                     # minio instance identifier, REQUIRED
    minio_cluster: minio              # minio cluster identifier, REQUIRED
    minio_clean: false                # cleanup minio during init? false by default
    minio_user: minio                 # minio os user, `minio` by default
    minio_https: true                 # use https for minio, true by default
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multiple drives
    #minio_volumes:                   # minio data volumes, override defaults if specified
    minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
    minio_port: 9000                  # minio service port, 9000 by default
    minio_admin_port: 9001            # minio console port, 9001 by default
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: S3User.MinIO    # root secret key, `S3User.MinIO` by default
    minio_extra_vars: ''              # extra environment variables
    minio_provision: true             # run minio provisioning tasks?
    minio_alias: sss                  # alias name for local minio deployment
    #minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
    minio_buckets:                    # list of minio bucket to be created
      - { name: pgsql }
      - { name: meta ,versioning: true }
      - { name: data }
    minio_users:                      # list of minio user to be created
      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }


    #================================================================#
    #                         VARS: REDIS                            #
    #================================================================#
    #redis_cluster:        <CLUSTER> # redis cluster name, required identity parameter
    #redis_node: 1            <NODE> # redis node sequence number, node int id required
    #redis_instances: {}      <NODE> # redis instances definition on this redis node
    redis_fs_main: /data              # redis main data mountpoint, `/data` by default
    redis_exporter_enabled: true      # install redis exporter on redis nodes?
    redis_exporter_port: 9121         # redis exporter listen port, 9121 by default
    redis_exporter_options: ''        # cli args and extra options for redis exporter
    redis_mode: standalone            # redis mode: standalone,cluster,sentinel
    redis_conf: redis.conf            # redis config template path (not used for sentinel)
    redis_bind_address: '0.0.0.0'     # redis bind address, empty string will use host ip
    redis_max_memory: 1GB             # max memory used by each redis instance
    redis_mem_policy: allkeys-lru     # redis memory eviction policy
    redis_password: ''                # redis password, empty string will disable password
    redis_rdb_save: ['1200 1']        # redis rdb save directives, disable with empty list
    redis_aof_enabled: false          # enable redis append only file?
    redis_rename_commands: {}         # rename redis dangerous commands
    redis_cluster_replicas: 1         # replica number for one master in redis cluster
    redis_sentinel_monitor: []        # sentinel master list, works on sentinel cluster only
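    # Example (illustrative IP and passwords omitted): a classic primary-replica
    # redis cluster, defined per-node with redis_node & redis_instances:
    # redis-ms:
    #   hosts: { 10.10.10.10: { redis_node: 1 ,redis_instances: { 6379: {} ,6380: { replica_of: '10.10.10.10 6379' } } } }
    #   vars: { redis_cluster: redis-ms ,redis_max_memory: 64MB }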


    #================================================================#
    #                         VARS: PGSQL                            #
    #================================================================#

    #-----------------------------------------------------------------
    # PG_IDENTITY
    #-----------------------------------------------------------------
    pg_mode: pgsql          #CLUSTER  # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
    # pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
    # pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
    # pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
    # pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
    # pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
    # pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
    # pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
    # gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
    pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance
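    # Example (illustrative IPs): a 3-node pgsql cluster described with the
    # identity parameters above (pg_cluster / pg_seq / pg_role):
    # pg-test:
    #   hosts:
    #     10.10.10.11: { pg_seq: 1 ,pg_role: primary }
    #     10.10.10.12: { pg_seq: 2 ,pg_role: replica }
    #     10.10.10.13: { pg_seq: 3 ,pg_role: offline }
    #   vars: { pg_cluster: pg-test }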

    #-----------------------------------------------------------------
    # PG_BUSINESS
    #-----------------------------------------------------------------
    # postgres business object definition, overwrite in group vars
    pg_users: []                      # postgres business users
    pg_databases: []                  # postgres business databases
    pg_services: []                   # postgres business services
    pg_hba_rules: []                  # business hba rules for postgres
    pgb_hba_rules: []                 # business hba rules for pgbouncer
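    # Example (hypothetical names & passwords): a business user and database,
    # typically defined in cluster group vars rather than globally:
    # pg_users:
    #   - { name: dbuser_app ,password: DBUser.App ,pgbouncer: true ,roles: [dbrole_readwrite] ,comment: example app user }
    # pg_databases:
    #   - { name: app ,owner: dbuser_app ,schemas: [app] ,comment: example app database }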
    # global credentials, overwrite in global vars
    pg_dbsu_password: ''              # dbsu password, empty string means no dbsu password by default
    pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor

    #-----------------------------------------------------------------
    # PG_INSTALL
    #-----------------------------------------------------------------
    pg_dbsu: postgres                 # os dbsu name, postgres by default, better not change it
    pg_dbsu_uid: 26                   # os dbsu uid and gid, 26 for default postgres users and groups
    pg_dbsu_sudo: limit               # dbsu sudo privilege, none,limit,all,nopass. limit by default
    pg_dbsu_home: /var/lib/pgsql      # postgresql home directory, `/var/lib/pgsql` by default
    pg_dbsu_ssh_exchange: true        # exchange postgres dbsu ssh key among same pgsql cluster
    pg_version: 18                    # postgres major version to be installed, 18 by default
    pg_bin_dir: /usr/pgsql/bin        # postgres binary dir, `/usr/pgsql/bin` by default
    pg_log_dir: /pg/log/postgres      # postgres log dir, `/pg/log/postgres` by default
    pg_packages:                      # pg packages to be installed, alias can be used
      - pgsql-main pgsql-common
    pg_extensions: []                 # pg extensions to be installed, alias can be used
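    # Example: extension names or aliases can be listed here (assuming these
    # extensions are available in the configured repos for the chosen pg_version):
    # pg_extensions: [ postgis, pgvector, timescaledb ]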

    #-----------------------------------------------------------------
    # PG_BOOTSTRAP
    #-----------------------------------------------------------------
    pg_data: /pg/data                 # postgres data directory, `/pg/data` by default
    pg_fs_main: /data/postgres        # postgres main data directory, `/data/postgres` by default
    pg_fs_backup: /data/backups       # postgres backup data directory, `/data/backups` by default
    pg_storage_type: SSD              # storage type for pg main data, SSD,HDD, SSD by default
    pg_dummy_filesize: 64MiB          # size of `/pg/dummy`, hold 64MB disk space for emergency use
    pg_listen: '0.0.0.0'              # postgres/pgbouncer listen addresses, comma separated list
    pg_port: 5432                     # postgres listen port, 5432 by default
    pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
    patroni_enabled: true             # if disabled, no postgres cluster will be created during init
    patroni_mode: default             # patroni working mode: default,pause,remove
    pg_namespace: /pg                 # top level key namespace in etcd, used by patroni & vip
    patroni_port: 8008                # patroni listen port, 8008 by default
    patroni_log_dir: /pg/log/patroni  # patroni log dir, `/pg/log/patroni` by default
    patroni_ssl_enabled: false        # secure patroni RestAPI communications with SSL?
    patroni_watchdog_mode: off        # patroni watchdog mode: automatic,required,off. off by default
    patroni_username: postgres        # patroni restapi username, `postgres` by default
    patroni_password: Patroni.API     # patroni restapi password, `Patroni.API` by default
    pg_etcd_password: ''              # etcd password for this pg cluster, '' to use pg_cluster
    pg_primary_db: postgres           # primary database name, used by citus etc., `postgres` by default
    pg_parameters: {}                 # extra parameters in postgresql.auto.conf
    pg_files: []                      # extra files to be copied to postgres data directory (e.g. license)
    pg_conf: oltp.yml                 # config template: oltp,olap,crit,tiny. `oltp.yml` by default
    pg_max_conn: auto                 # postgres max connections, `auto` will use recommended value
    pg_shared_buffer_ratio: 0.25      # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
    pg_io_method: worker              # io method for postgres, auto,fsync,worker,io_uring, worker by default
    pg_rto: norm                      # shared rto mode for patroni & haproxy: fast,norm,safe,wide
    pg_rpo: 1048576                   # recovery point objective in bytes, `1MiB` at most by default
    pg_libs: 'pg_stat_statements, auto_explain'  # preloaded libraries, `pg_stat_statements,auto_explain` by default
    pg_delay: 0                       # replication apply delay for standby cluster leader
    pg_checksum: true                 # enable data checksum for postgres cluster?
    pg_encoding: UTF8                 # database cluster encoding, `UTF8` by default
    pg_locale: C                      # database cluster locale, `C` by default
    pg_lc_collate: C                  # database cluster collate, `C` by default
    pg_lc_ctype: C                    # database character type, `C` by default
    #pgsodium_key: ""                 # pgsodium key, 64 hex digit, default to sha256(pg_cluster)
    #pgsodium_getkey_script: ""       # pgsodium getkey script path, pgsodium_getkey by default

    #-----------------------------------------------------------------
    # PG_PROVISION
    #-----------------------------------------------------------------
    pg_provision: true                # provision postgres cluster after bootstrap
    pg_init: pg-init                  # provision init script for cluster template, `pg-init` by default
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_privileges:            # default privileges when created by admin user
      - GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
      - GRANT SELECT     ON TABLES    TO dbrole_readonly
      - GRANT SELECT     ON SEQUENCES TO dbrole_readonly
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
      - GRANT USAGE      ON SCHEMAS   TO dbrole_offline
      - GRANT SELECT     ON TABLES    TO dbrole_offline
      - GRANT SELECT     ON SEQUENCES TO dbrole_offline
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
      - GRANT INSERT     ON TABLES    TO dbrole_readwrite
      - GRANT UPDATE     ON TABLES    TO dbrole_readwrite
      - GRANT DELETE     ON TABLES    TO dbrole_readwrite
      - GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
      - GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
      - GRANT TRUNCATE   ON TABLES    TO dbrole_admin
      - GRANT REFERENCES ON TABLES    TO dbrole_admin
      - GRANT TRIGGER    ON TABLES    TO dbrole_admin
      - GRANT CREATE     ON SCHEMAS   TO dbrole_admin
    pg_default_schemas: [ monitor ]   # default schemas to be created
    pg_default_extensions:            # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor }
      - { name: pgstattuple        ,schema: monitor }
      - { name: pg_buffercache     ,schema: monitor }
      - { name: pageinspect        ,schema: monitor }
      - { name: pg_prewarm         ,schema: monitor }
      - { name: pg_visibility      ,schema: monitor }
      - { name: pg_freespacemap    ,schema: monitor }
      - { name: postgres_fdw       ,schema: public  }
      - { name: file_fdw           ,schema: public  }
      - { name: btree_gist         ,schema: public  }
      - { name: btree_gin          ,schema: public  }
      - { name: pg_trgm            ,schema: public  }
      - { name: intagg             ,schema: public  }
      - { name: intarray           ,schema: public  }
      - { name: pg_repack }
    pg_reload: true                   # reload postgres after hba changes
    pg_default_hba_rules:             # postgres default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}
    pgb_default_hba_rules:            # pgbouncer default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

    #-----------------------------------------------------------------
    # PG_BACKUP
    #-----------------------------------------------------------------
    pgbackrest_enabled: true          # enable pgbackrest on pgsql host?
    pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
    pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_init_backup: true      # take a full backup after pgbackrest is initialized?
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backups for the last 14 days

    #-----------------------------------------------------------------
    # PG_ACCESS
    #-----------------------------------------------------------------
    pgbouncer_enabled: true           # if disabled, pgbouncer will not be launched on pgsql host
    pgbouncer_port: 6432              # pgbouncer listen port, 6432 by default
    pgbouncer_log_dir: /pg/log/pgbouncer  # pgbouncer log dir, `/pg/log/pgbouncer` by default
    pgbouncer_auth_query: false       # query postgres to retrieve unlisted business users?
    pgbouncer_poolmode: transaction   # pooling mode: transaction,session,statement, transaction by default
    pgbouncer_sslmode: disable        # pgbouncer client ssl mode, disable by default
    pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
    pg_weight: 100          #INSTANCE # relative load balance weight in service, 100 by default, 0-255
    pg_service_provider: ''           # dedicate haproxy node group name, or empty string for local nodes by default
    pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
    pg_default_services:              # postgres default service definitions
      - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
    pg_vip_enabled: false             # enable a l2 vip for pgsql primary? false by default
    pg_vip_address: 127.0.0.1/24      # vip address in `<ipv4>/<mask>` format, required if vip is enabled
    pg_vip_interface: eth0            # vip network interface to listen, eth0 by default
    pg_dns_suffix: ''                 # pgsql dns suffix, '' by default
    pg_dns_target: auto               # auto, primary, vip, none, or ad hoc ip

    #-----------------------------------------------------------------
    # PG_MONITOR
    #-----------------------------------------------------------------
    pg_exporter_enabled: true              # enable pg_exporter on pgsql hosts?
    pg_exporter_config: pg_exporter.yml    # pg_exporter configuration file name
    pg_exporter_cache_ttls: '1,10,60,300'  # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
    pg_exporter_port: 9630                 # pg_exporter listen port, 9630 by default
    pg_exporter_params: 'sslmode=disable'  # extra url parameters for pg_exporter dsn
    pg_exporter_url: ''                    # overwrite auto-generate pg dsn if specified
    pg_exporter_auto_discovery: true       # enable auto database discovery? enabled by default
    pg_exporter_exclude_database: 'template0,template1,postgres' # csv of databases that WILL NOT be monitored during auto-discovery
    pg_exporter_include_database: ''       # csv of databases that WILL BE monitored during auto-discovery
    pg_exporter_connect_timeout: 200       # pg_exporter connect timeout in ms, 200 by default
    pg_exporter_options: ''                # overwrite extra options for pg_exporter
    pgbouncer_exporter_enabled: true       # enable pgbouncer_exporter on pgsql hosts?
    pgbouncer_exporter_port: 9631          # pgbouncer_exporter listen port, 9631 by default
    pgbouncer_exporter_url: ''             # overwrite auto-generate pgbouncer dsn if specified
    pgbouncer_exporter_options: ''         # overwrite extra options for pgbouncer_exporter
    pgbackrest_exporter_enabled: true      # enable pgbackrest_exporter on pgsql hosts?
    pgbackrest_exporter_port: 9854         # pgbackrest_exporter listen port, 9854 by default
    pgbackrest_exporter_options: >
      --collect.interval=120
      --log.level=info

    #-----------------------------------------------------------------
    # PG_REMOVE
    #-----------------------------------------------------------------
    pg_safeguard: false               # stop pg_remove from running if pg_safeguard is enabled, false by default
    pg_rm_data: true                  # remove postgres data during remove? true by default
    pg_rm_backup: true                # remove pgbackrest backup during primary remove? true by default
    pg_rm_pkg: true                   # uninstall postgres packages during remove? true by default

...
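The `pg_default_services` defined above select cluster members with JMESPath expressions (`selector` / `backup`). A rough Python equivalent of the `offline` service's two selectors, using a hypothetical three-node inventory (illustrative only, not Pigsty's actual implementation):

```python
# hypothetical inventory, mirroring a cluster with one offline-enabled replica
instances = [
    {"ip": "10.10.10.11", "pg_role": "primary", "pg_offline_query": False},
    {"ip": "10.10.10.12", "pg_role": "replica", "pg_offline_query": False},
    {"ip": "10.10.10.13", "pg_role": "replica", "pg_offline_query": True},
]

# selector: "[? pg_role == `offline` || pg_offline_query ]"
members = [i for i in instances
           if i["pg_role"] == "offline" or i["pg_offline_query"]]

# backup:   "[? pg_role == `replica` && !pg_offline_query ]"
backups = [i for i in instances
           if i["pg_role"] == "replica" and not i["pg_offline_query"]]

print([i["ip"] for i in members])   # ['10.10.10.13']
print([i["ip"] for i in backups])   # ['10.10.10.12']
```

Normal offline traffic goes to the dedicated offline member; the plain replica only serves as a backup server when no offline member is available.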

Explanation

The demo/el template is optimized for Enterprise Linux family distributions.

Supported Distributions:

  • RHEL 8/9/10
  • Rocky Linux 8/9/10
  • Alma Linux 8/9/10
  • Oracle Linux 8/9

Key Features:

  • Uses EPEL and PGDG repositories
  • Optimized for YUM/DNF package manager
  • Supports EL-specific package names

Use Cases:

  • Enterprise production environments (RHEL/Rocky/Alma recommended)
  • Long-term support and stability requirements
  • Environments using Red Hat ecosystem

8.34 - demo/debian

Configuration template optimized for Debian/Ubuntu

The demo/debian configuration template is optimized for Debian and Ubuntu distributions.


Overview

  • Config Name: demo/debian
  • Node Count: Single node
  • Description: Debian/Ubuntu optimized configuration template
  • OS Distro: d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta, demo/el

Usage:

./configure -c demo/debian [-i <primary_ip>]
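The `-i` flag rewrites the template's sandbox placeholder IP (`10.10.10.10`) to your own primary IP. Conceptually, the substitution works like this sketch (an assumption about the effect, not the actual `configure` implementation):

```python
# illustrative stand-in for one line of the demo/debian template
template = "admin_ip: 10.10.10.10\n"

# the value you would pass as `-i <primary_ip>` (hypothetical address)
primary_ip = "10.10.10.20"

# replace every occurrence of the placeholder with your primary IP
rendered = template.replace("10.10.10.10", primary_ip)
print(rendered)   # admin_ip: 10.10.10.20
```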

Content

Source: pigsty/conf/demo/debian.yml

---
#==============================================================#
# File      :   debian.yml
# Desc      :   Default parameters for Debian/Ubuntu in Pigsty
# Ctime     :   2020-05-22
# Mtime     :   2026-01-14
# Docs      :   https://pigsty.io/docs/conf/debian
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


#==============================================================#
#                        Sandbox (4-node)                      #
#==============================================================#
# admin user : vagrant  (nopass ssh & sudo already set)        #
# 1.  meta    :    10.10.10.10     (2 Core | 4GB)    pg-meta   #
# 2.  node-1  :    10.10.10.11     (1 Core | 1GB)    pg-test-1 #
# 3.  node-2  :    10.10.10.12     (1 Core | 1GB)    pg-test-2 #
# 4.  node-3  :    10.10.10.13     (1 Core | 1GB)    pg-test-3 #
# (replace these IPs if your 4-node env uses different addrs)  #
# 2 L2 VIPs (only available inside the same LAN):              #
#     pg-meta --->  10.10.10.2 ---> 10.10.10.10                #
#     pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}          #
#==============================================================#


all:

  ##################################################################
  #                            CLUSTERS                            #
  ##################################################################
  # infra, etcd, minio, pgsql, and redis clusters are defined as
  # k:v pairs inside `all.children`, where the key is the cluster
  # name and the value is the cluster definition, consisting of:
  # `hosts`: cluster members' IPs and instance-level variables
  # `vars` : cluster-level variables
  ##################################################################
  children:                                 # groups definition

    # infra cluster for proxy, monitor, alert, etc.
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    #----------------------------------#
    # pgsql cluster: pg-meta (CMDB)    #
    #----------------------------------#
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary , pg_offline_query: true } }
      vars:
        pg_cluster: pg-meta

        # define business databases here: https://pigsty.io/docs/pgsql/config/db
        pg_databases:                       # define business databases on this cluster, array of database definition
          - name: meta                      # REQUIRED, `name` is the only mandatory field of a database definition
            #state: create                  # optional, create|absent|recreate, create by default
            baseline: cmdb.sql              # optional, database sql baseline path, (relative path among ansible search path, e.g: files/)
            schemas: [pigsty]               # optional, additional schemas to be created, array of schema names
            extensions:                     # optional, additional extensions to be installed: array of `{name[,schema]}`
              - { name: vector }            # install pgvector extension on this database by default
            comment: pigsty meta database   # optional, comment string for this database
            #pgbouncer: true                # optional, add this database to pgbouncer database list? true by default
            #owner: postgres                # optional, database owner, current user if not specified
            #template: template1            # optional, which template to use, template1 by default
            #strategy: FILE_COPY            # optional, clone strategy: FILE_COPY or WAL_LOG (PG15+), default to PG's default
            #encoding: UTF8                 # optional, inherited from template / cluster if not defined (UTF8)
            #locale: C                      # optional, inherited from template / cluster if not defined (C)
            #lc_collate: C                  # optional, inherited from template / cluster if not defined (C)
            #lc_ctype: C                    # optional, inherited from template / cluster if not defined (C)
            #locale_provider: libc          # optional, locale provider: libc, icu, builtin (PG15+)
            #icu_locale: en-US              # optional, icu locale for icu locale provider (PG15+)
            #icu_rules: ''                  # optional, icu rules for icu locale provider (PG16+)
            #builtin_locale: C.UTF-8        # optional, builtin locale for builtin locale provider (PG17+)
            #tablespace: pg_default         # optional, default tablespace, pg_default by default
            #is_template: false             # optional, mark database as template, allowing clone by any user with CREATEDB privilege
            #allowconn: true                # optional, allow connection, true by default. false will disable connect at all
            #revokeconn: false              # optional, revoke public connection privilege. false by default. (leave connect with grant option to owner)
            #register_datasource: true      # optional, register this database to grafana datasources? true by default
            #connlimit: -1                  # optional, database connection limit, default -1 disable limit
            #pool_auth_user: dbuser_meta    # optional, all connection to this pgbouncer database will be authenticated by this user
            #pool_mode: transaction         # optional, pgbouncer pool mode at database level, default transaction
            #pool_size: 64                  # optional, pgbouncer pool size at database level, default 64
            #pool_size_reserve: 32          # optional, pgbouncer pool size reserve at database level, default 32
            #pool_size_min: 0               # optional, pgbouncer pool size min at database level, default 0
            #pool_max_db_conn: 100          # optional, max database connections at database level, default 100
          #- { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
          #- { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          #- { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
          #- { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          #- { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }

        # define business users here: https://pigsty.io/docs/pgsql/config/user
        pg_users:                           # define business users/roles on this cluster, array of user definition
          - name: dbuser_meta               # REQUIRED, `name` is the only mandatory field of a user definition
            password: DBUser.Meta           # optional, password, can be a scram-sha-256 hash string or plain text
            #login: true                     # optional, can log in, true by default  (new biz ROLE should be false)
            #superuser: false                # optional, is superuser? false by default
            #createdb: false                 # optional, can create database? false by default
            #createrole: false               # optional, can create role? false by default
            #inherit: true                   # optional, can this role use inherited privileges? true by default
            #replication: false              # optional, can this role do replication? false by default
            #bypassrls: false                # optional, can this role bypass row level security? false by default
            #pgbouncer: true                 # optional, add this user to pgbouncer user-list? false by default (production user should be true explicitly)
            #connlimit: -1                   # optional, user connection limit, default -1 disable limit
            #expire_in: 3650                 # optional, now + n days when this role is expired (OVERWRITE expire_at)
            #expire_at: '2030-12-31'         # optional, YYYY-MM-DD 'timestamp' when this role is expired  (OVERWRITTEN by expire_in)
            #comment: pigsty admin user      # optional, comment string for this user/role
            #roles: [dbrole_admin]           # optional, belonged roles. default roles are: dbrole_{admin,readonly,readwrite,offline}
            #parameters: {}                  # optional, role level parameters with `ALTER ROLE SET`
            #pool_mode: transaction          # optional, pgbouncer pool mode at user level, transaction by default
            #pool_connlimit: -1              # optional, max database connections at user level, default -1 disable limit
          - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly], comment: read-only viewer for meta database}
          #- {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database   }
          #- {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database  }
          #- {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service      }
          #- {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service    }

        # define business service here: https://pigsty.io/docs/pgsql/service
        pg_services:                        # extra services in addition to pg_default_services, array of service definition
          # standby service will route {ip|name}:5435 to sync replica's pgbouncer (5435->6432 standby)
          - name: standby                   # required, service name, the actual svc name will be prefixed with `pg_cluster`, e.g: pg-meta-standby
            port: 5435                      # required, service exposed port (work as kubernetes service node port mode)
            ip: "*"                         # optional, service bind ip address, `*` for all ip by default
            selector: "[]"                  # required, service member selector, use JMESPath to filter inventory
            dest: default                   # optional, destination port, default|postgres|pgbouncer|<port_number>, 'default' by default
            check: /sync                    # optional, health check url path, / by default
            backup: "[? pg_role == `primary`]"  # backup server selector
            maxconn: 3000                   # optional, max allowed front-end connections
            balance: roundrobin             # optional, haproxy load balance algorithm (roundrobin by default, other: leastconn)
            #options: 'inter 3s fastinter 1s downinter 5s rise 3 fall 3 on-marked-down shutdown-sessions slowstart 30s maxconn 3000 maxqueue 128 weight 100'

        # define pg extensions: https://pigsty.io/docs/pgsql/ext/
        pg_libs: 'pg_stat_statements, auto_explain' # shared_preload_libraries, pg_stat_statements & auto_explain by default
        #pg_extensions: [] # extensions to be installed on this cluster

        # define HBA rules here: https://pigsty.io/docs/pgsql/config/hba
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}

        pg_vip_enabled: true
        pg_vip_address: 10.10.10.2/24
        pg_vip_interface: eth1

        pg_crontab:  # make a full backup 1 am everyday
          - '00 01 * * * /pg/bin/pg-backup full'

    #----------------------------------#
    # pgsql cluster: pg-test (3 nodes) #
    #----------------------------------#
    # pg-test --->  10.10.10.3 ---> 10.10.10.1{1,2,3}
    pg-test:                          # define the new 3-node cluster pg-test
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }   # primary instance, leader of cluster
        10.10.10.12: { pg_seq: 2, pg_role: replica }   # replica instance, follower of leader
        10.10.10.13: { pg_seq: 3, pg_role: replica, pg_offline_query: true } # replica with offline access
      vars:
        pg_cluster: pg-test           # define pgsql cluster name
        pg_users:  [{ name: test , password: test , pgbouncer: true , roles: [ dbrole_admin ] }]
        pg_databases: [{ name: test }] # create a database and user named 'test'
        node_tune: tiny
        pg_conf: tiny.yml
        pg_vip_enabled: true
        pg_vip_address: 10.10.10.3/24
        pg_vip_interface: eth1
        pg_crontab:  # full backup at 1 AM on Monday, incremental backups on the other days
          - '00 01 * * 1 /pg/bin/pg-backup full'
          - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'

    #----------------------------------#
    # redis ms, sentinel, native cluster
    #----------------------------------#
    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    redis-meta: # redis sentinel x 3
      hosts: { 10.10.10.11: { redis_node: 1 , redis_instances: { 26379: { } ,26380: { } ,26381: { } } } }
      vars:
        redis_cluster: redis-meta
        redis_password: 'redis.meta'
        redis_mode: sentinel
        redis_max_memory: 16MB
        redis_sentinel_monitor: # primary list for redis sentinel, use cls as name, primary ip:port
          - { name: redis-ms, host: 10.10.10.10, port: 6379 ,password: redis.ms, quorum: 2 }

    redis-test: # redis native cluster: 3m x 3s
      hosts:
        10.10.10.12: { redis_node: 1 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
        10.10.10.13: { redis_node: 2 ,redis_instances: { 6379: { } ,6380: { } ,6381: { } } }
      vars: { redis_cluster: redis-test ,redis_password: 'redis.test' ,redis_mode: cluster, redis_max_memory: 32MB }


  ####################################################################
  #                             VARS                                 #
  ####################################################################
  vars:                               # global variables


    #================================================================#
    #                         VARS: INFRA                            #
    #================================================================#

    #-----------------------------------------------------------------
    # META
    #-----------------------------------------------------------------
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default,china,europe
    language: en                      # default language: en, zh
    proxy_env:                        # global proxy env when downloading packages
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn"
      # http_proxy:  # set your proxy here: e.g http://user:[email protected]
      # https_proxy: # set your proxy here: e.g http://user:[email protected]
      # all_proxy:   # set your proxy here: e.g http://user:[email protected]

    #-----------------------------------------------------------------
    # CA
    #-----------------------------------------------------------------
    ca_create: true                   # create ca if not exists? or just abort
    ca_cn: pigsty-ca                  # ca common name, fixed as pigsty-ca
    cert_validity: 7300d              # cert validity, 20 years by default

    #-----------------------------------------------------------------
    # INFRA_IDENTITY
    #-----------------------------------------------------------------
    #infra_seq: 1                     # infra node identity, explicitly required
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name
    infra_data: /data/infra           # default data path for infrastructure data

    #-----------------------------------------------------------------
    # REPO
    #-----------------------------------------------------------------
    repo_enabled: true                # create a local apt repo on this infra node?
    repo_home: /www                   # repo home dir, `/www` by default
    repo_name: pigsty                 # repo name, pigsty by default
    repo_endpoint: http://${admin_ip}:80 # access point to this repo by domain or ip:port
    repo_remove: true                 # remove existing upstream repo
    repo_modules: infra,node,pgsql    # which repo modules are installed in repo_upstream
    repo_upstream:                    # where to download
      - { name: pigsty-local   ,description: 'Pigsty Local'       ,module: local   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://${admin_ip}/pigsty ./' }}
      - { name: pigsty-pgsql   ,description: 'Pigsty PgSQL'       ,module: pgsql   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/pgsql/${distro_codename} ${distro_codename} main', china: 'https://repo.pigsty.cc/apt/pgsql/${distro_codename} ${distro_codename} main' }}
      - { name: pigsty-infra   ,description: 'Pigsty Infra'       ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/infra/ generic main' ,china: 'https://repo.pigsty.cc/apt/infra/ generic main' }}
      - { name: nginx          ,description: 'Nginx'              ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://nginx.org/packages/${distro_name} ${distro_codename} nginx' }}
      - { name: docker-ce      ,description: 'Docker'             ,module: infra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable'                               ,china: 'https://mirrors.aliyun.com/docker-ce/linux/${distro_name} ${distro_codename} stable' }}
      - { name: base           ,description: 'Debian Basic'       ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename} main non-free-firmware'                                  ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename} main restricted universe multiverse' }}
      - { name: updates        ,description: 'Debian Updates'     ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://deb.debian.org/debian/ ${distro_codename}-updates main non-free-firmware'                          ,china: 'https://mirrors.aliyun.com/debian/ ${distro_codename}-updates main restricted universe multiverse' }}
      - { name: security       ,description: 'Debian Security'    ,module: node    ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://security.debian.org/debian-security ${distro_codename}-security main non-free-firmware'            ,china: 'https://mirrors.aliyun.com/debian-security/ ${distro_codename}-security main non-free-firmware' }}
      - { name: base           ,description: 'Ubuntu Basic'       ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}           main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}           main restricted universe multiverse' }}
      - { name: updates        ,description: 'Ubuntu Updates'     ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-updates   main restricted universe multiverse' }}
      - { name: backports      ,description: 'Ubuntu Backports'   ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-backports main restricted universe multiverse' }}
      - { name: security       ,description: 'Ubuntu Security'    ,module: node    ,releases: [         20,22,24] ,arch: [x86_64         ] ,baseurl: { default: 'https://mirrors.edge.kernel.org/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu/ ${distro_codename}-security  main restricted universe multiverse' }}
      - { name: base           ,description: 'Ubuntu Basic'       ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}             main universe multiverse restricted' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}           main restricted universe multiverse' }}
      - { name: updates        ,description: 'Ubuntu Updates'     ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-updates     main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-updates   main restricted universe multiverse' }}
      - { name: backports      ,description: 'Ubuntu Backports'   ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-backports   main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-backports main restricted universe multiverse' }}
      - { name: security       ,description: 'Ubuntu Security'    ,module: node    ,releases: [         20,22,24] ,arch: [        aarch64] ,baseurl: { default: 'http://ports.ubuntu.com/ubuntu-ports/ ${distro_codename}-security    main restricted universe multiverse' ,china: 'https://mirrors.aliyun.com/ubuntu-ports/ ${distro_codename}-security  main restricted universe multiverse' }}
      - { name: pgdg           ,description: 'PGDG'               ,module: pgsql   ,releases: [11,12,13,   22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg main' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg main' }}
      - { name: pgdg-beta      ,description: 'PGDG Beta'          ,module: beta    ,releases: [11,12,13,   22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.postgresql.org/pub/repos/apt/ ${distro_codename}-pgdg-testing main 19' ,china: 'https://mirrors.aliyun.com/postgresql/repos/apt/ ${distro_codename}-pgdg-testing main 19' }}
      - { name: timescaledb    ,description: 'TimescaleDB'        ,module: extra   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/timescale/timescaledb/${distro_name}/ ${distro_codename} main' }}
      - { name: citus          ,description: 'Citus'              ,module: extra   ,releases: [11,12,   20,22   ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packagecloud.io/citusdata/community/${distro_name}/ ${distro_codename} main' } }
      - { name: percona        ,description: 'Percona TDE'        ,module: percona ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/percona ${distro_codename} main' ,china: 'https://repo.pigsty.cc/apt/percona ${distro_codename} main' ,origin: 'http://repo.percona.com/ppg-18.1/apt ${distro_codename} main' }}
      - { name: wiltondb       ,description: 'WiltonDB'           ,module: mssql   ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.pigsty.io/apt/mssql/ ${distro_codename} main'  ,china: 'https://repo.pigsty.cc/apt/mssql/ ${distro_codename} main'  ,origin: 'https://ppa.launchpadcontent.net/wiltondb/wiltondb/ubuntu/ ${distro_codename} main'  }}
      - { name: groonga        ,description: 'Groonga Debian'     ,module: groonga ,releases: [11,12,13         ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.groonga.org/debian/ ${distro_codename} main' }}
      - { name: groonga        ,description: 'Groonga Ubuntu'     ,module: groonga ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/groonga/ppa/ubuntu/ ${distro_codename} main' }}
      - { name: mysql          ,description: 'MySQL'              ,module: mysql   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mysql.com/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools', china: 'https://mirrors.tuna.tsinghua.edu.cn/mysql/apt/${distro_name} ${distro_codename} mysql-8.0 mysql-tools' }}
      - { name: mongo          ,description: 'MongoDB'            ,module: mongo   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://repo.mongodb.org/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse', china: 'https://mirrors.aliyun.com/mongodb/apt/${distro_name} ${distro_codename}/mongodb-org/8.0 multiverse' }}
      - { name: redis          ,description: 'Redis'              ,module: redis   ,releases: [11,12,   20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.redis.io/deb ${distro_codename} main' }}
      - { name: llvm           ,description: 'LLVM'               ,module: llvm    ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://apt.llvm.org/${distro_codename}/ llvm-toolchain-${distro_codename} main' ,china: 'https://mirrors.tuna.tsinghua.edu.cn/llvm-apt/${distro_codename}/ llvm-toolchain-${distro_codename} main' }}
      - { name: haproxyd       ,description: 'Haproxy Debian'     ,module: haproxy ,releases: [11,12            ] ,arch: [x86_64, aarch64] ,baseurl: { default: 'http://haproxy.debian.net/ ${distro_codename}-backports-3.1 main' }}
      - { name: haproxyu       ,description: 'Haproxy Ubuntu'     ,module: haproxy ,releases: [         20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://ppa.launchpadcontent.net/vbernat/haproxy-3.1/ubuntu/ ${distro_codename} main' }}
      - { name: grafana        ,description: 'Grafana'            ,module: grafana ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://apt.grafana.com stable main' ,china: 'https://mirrors.aliyun.com/grafana/apt/ stable main' }}
      - { name: kubernetes     ,description: 'Kubernetes'         ,module: kube    ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /', china: 'https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.33/deb/ /' }}
      - { name: gitlab-ee      ,description: 'Gitlab EE'          ,module: gitlab  ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ee/${distro_name}/ ${distro_codename} main' }}
      - { name: gitlab-ce      ,description: 'Gitlab CE'          ,module: gitlab  ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.gitlab.com/gitlab/gitlab-ce/${distro_name}/ ${distro_codename} main' }}
      - { name: clickhouse     ,description: 'ClickHouse'         ,module: click   ,releases: [11,12,13,20,22,24] ,arch: [x86_64, aarch64] ,baseurl: { default: 'https://packages.clickhouse.com/deb/ stable main', china: 'https://mirrors.aliyun.com/clickhouse/deb/ stable main' }}

    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [ pgsql-main ]
    repo_url_packages: []

    #-----------------------------------------------------------------
    # INFRA_PACKAGE
    #-----------------------------------------------------------------
    infra_packages:                   # packages to be installed on infra nodes
      - grafana,grafana-plugins,grafana-victorialogs-ds,grafana-victoriametrics-ds,victoria-metrics,victoria-logs,victoria-traces,vmutils,vlogscli,alertmanager
      - node-exporter,blackbox-exporter,nginx-exporter,pg-exporter,pev2,nginx,dnsmasq,ansible,etcd,python3-requests,redis,mcli,restic,certbot,python3-certbot-nginx

    #-----------------------------------------------------------------
    # NGINX
    #-----------------------------------------------------------------
    nginx_enabled: true               # enable nginx on this infra node?
    nginx_clean: false                # clean existing nginx config during init?
    nginx_exporter_enabled: true      # enable nginx_exporter on this infra node?
    nginx_exporter_port: 9113         # nginx_exporter listen port, 9113 by default
    nginx_sslmode: enable             # nginx ssl mode? disable,enable,enforce
    nginx_cert_validity: 397d         # nginx self-signed cert validity, 397d by default
    nginx_home: /www                  # nginx content dir, `/www` by default (soft link to nginx_data)
    nginx_data: /data/nginx           # nginx actual data dir, /data/nginx by default
    nginx_users: { admin : pigsty }   # nginx basic auth users: name and pass dict
    nginx_port: 80                    # nginx listen port, 80 by default
    nginx_ssl_port: 443               # nginx ssl listen port, 443 by default
    certbot_sign: false               # sign nginx cert with certbot during setup?
    certbot_email: [email protected]     # certbot email address, used for free ssl
    certbot_options: ''               # certbot extra options

    #-----------------------------------------------------------------
    # DNS
    #-----------------------------------------------------------------
    dns_enabled: true                 # setup dnsmasq on this infra node?
    dns_port: 53                      # dns server listen port, 53 by default
    dns_records:                      # dynamic dns records resolved by dnsmasq
      - "${admin_ip} i.pigsty"
      - "${admin_ip} m.pigsty supa.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

    #-----------------------------------------------------------------
    # VICTORIA
    #-----------------------------------------------------------------
    vmetrics_enabled: true            # enable victoria-metrics on this infra node?
    vmetrics_clean: false             # clean existing victoria-metrics data during init?
    vmetrics_port: 8428               # victoria-metrics listen port, 8428 by default
    vmetrics_scrape_interval: 10s     # victoria global scrape interval, 10s by default
    vmetrics_scrape_timeout: 8s       # victoria global scrape timeout, 8s by default
    vmetrics_options: >-
      -retentionPeriod=15d
      -promscrape.fileSDCheckInterval=5s
    vlogs_enabled: true               # enable victoria-logs on this infra node?
    vlogs_clean: false                # clean victoria-logs data during init?
    vlogs_port: 9428                  # victoria-logs listen port, 9428 by default
    vlogs_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
      -insert.maxLineSizeBytes=1MB
      -search.maxQueryDuration=120s
    vtraces_enabled: true             # enable victoria-traces on this infra node?
    vtraces_clean: false              # clean victoria-traces data during init?
    vtraces_port: 10428               # victoria-traces listen port, 10428 by default
    vtraces_options: >-
      -retentionPeriod=15d
      -retention.maxDiskSpaceUsageBytes=50GiB
    vmalert_enabled: true             # enable vmalert on this infra node?
    vmalert_port: 8880                # vmalert listen port, 8880 by default
    vmalert_options: ''               # vmalert extra server options

    #-----------------------------------------------------------------
    # PROMETHEUS
    #-----------------------------------------------------------------
    blackbox_enabled: true            # setup blackbox_exporter on this infra node?
    blackbox_port: 9115               # blackbox_exporter listen port, 9115 by default
    blackbox_options: ''              # blackbox_exporter extra server options
    alertmanager_enabled: true        # setup alertmanager on this infra node?
    alertmanager_port: 9059           # alertmanager listen port, 9059 by default
    alertmanager_options: ''          # alertmanager extra server options
    exporter_metrics_path: /metrics   # exporter metric path, `/metrics` by default

    #-----------------------------------------------------------------
    # GRAFANA
    #-----------------------------------------------------------------
    grafana_enabled: true             # enable grafana on this infra node?
    grafana_port: 3000                # default listen port for grafana
    grafana_clean: false              # clean grafana data during init?
    grafana_admin_username: admin     # grafana admin username, `admin` by default
    grafana_admin_password: pigsty    # grafana admin password, `pigsty` by default
    grafana_auth_proxy: false         # enable grafana auth proxy?
    grafana_pgurl: ''                 # external postgres database url for grafana if given
    grafana_view_password: DBUser.Viewer # password for grafana meta pg datasource


    #================================================================#
    #                         VARS: NODE                             #
    #================================================================#

    #-----------------------------------------------------------------
    # NODE_IDENTITY
    #-----------------------------------------------------------------
    #nodename:           # [INSTANCE] # node instance identity, use hostname if missing, optional
    node_cluster: nodes   # [CLUSTER] # node cluster identity, use 'nodes' if missing, optional
    nodename_overwrite: true          # overwrite node's hostname with nodename?
    nodename_exchange: false          # exchange nodename among play hosts?
    node_id_from_pg: true             # use postgres identity as node identity if applicable?

    #-----------------------------------------------------------------
    # NODE_DNS
    #-----------------------------------------------------------------
    node_write_etc_hosts: true        # modify `/etc/hosts` on target node?
    node_default_etc_hosts:           # static dns records in `/etc/hosts`
      - "${admin_ip} i.pigsty"
    node_etc_hosts: []                # extra static dns records in `/etc/hosts`
    node_dns_method: add              # how to handle dns servers: add,none,overwrite
    node_dns_servers: ['${admin_ip}'] # dynamic nameserver in `/etc/resolv.conf`
    node_dns_options:                 # dns resolv options in `/etc/resolv.conf`
      - options single-request-reopen timeout:1

    #-----------------------------------------------------------------
    # NODE_PACKAGE
    #-----------------------------------------------------------------
    node_repo_modules: local          # upstream repo to be added on node, local by default
    node_repo_remove: true            # remove existing repo on node?
    node_packages: [openssh-server]   # packages to be installed on current nodes with the latest version
    node_default_packages:            # default packages to be installed on all nodes
      - lz4,unzip,bzip2,pv,jq,git,ncdu,make,patch,bash,lsof,wget,uuid,tuned,nvme-cli,numactl,sysstat,iotop,htop,rsync,tcpdump
      - python3,python3-pip,socat,lrzsz,net-tools,ipvsadm,telnet,ca-certificates,openssl,keepalived,etcd,haproxy,chrony,pig
      - zlib1g,acl,dnsutils,libreadline-dev,vim-tiny,node-exporter,openssh-server,openssh-client,vector
    node_uv_env: /data/venv           # uv venv path, empty string to skip
    node_pip_packages: ''             # pip packages to install in uv venv

    #-----------------------------------------------------------------
    # NODE_SEC
    #-----------------------------------------------------------------
    node_selinux_mode: permissive     # set selinux mode: enforcing,permissive,disabled
    node_firewall_mode: zone          # firewall mode: none (skip), off (disable), zone (enable & config)
    node_firewall_intranet:           # which intranet cidr considered as internal network
      - 10.0.0.0/8
      - 192.168.0.0/16
      - 172.16.0.0/12
    node_firewall_public_port:        # expose these ports to public network in (zone, strict) mode
      - 22                            # enable ssh access
      - 80                            # enable http access
      - 443                           # enable https access
      - 5432                          # enable postgresql access (think twice before exposing it!)

    #-----------------------------------------------------------------
    # NODE_TUNE
    #-----------------------------------------------------------------
    node_disable_numa: false          # disable node numa, reboot required
    node_disable_swap: false          # disable node swap, use with caution
    node_static_network: true         # preserve dns resolver settings after reboot
    node_disk_prefetch: false         # setup disk prefetch on HDD to increase performance
    node_kernel_modules: [ softdog, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh ]
    node_hugepage_count: 0            # number of 2MB hugepage, take precedence over ratio
    node_hugepage_ratio: 0            # node mem hugepage ratio, 0 disables it by default
    node_overcommit_ratio: 0          # node mem overcommit ratio, 0 disables it by default
    node_tune: oltp                   # node tuned profile: none,oltp,olap,crit,tiny
    node_sysctl_params: { }           # sysctl parameters in k:v format in addition to tuned

    #-----------------------------------------------------------------
    # NODE_ADMIN
    #-----------------------------------------------------------------
    node_data: /data                  # node main data directory, `/data` by default
    node_admin_enabled: true          # create an admin user on target node?
    node_admin_uid: 88                # uid and gid for node admin user
    node_admin_username: dba          # name of node admin user, `dba` by default
    node_admin_sudo: nopass           # admin sudo privilege, all,nopass. nopass by default
    node_admin_ssh_exchange: true     # exchange admin ssh key among node cluster
    node_admin_pk_current: true       # add current user's ssh pk to admin authorized_keys
    node_admin_pk_list: []            # ssh public keys to be added to admin user
    node_aliases: {}                  # extra shell aliases to be added, k:v dict

    #-----------------------------------------------------------------
    # NODE_TIME
    #-----------------------------------------------------------------
    node_timezone: ''                 # setup node timezone, empty string to skip
    node_ntp_enabled: true            # enable chronyd time sync service?
    node_ntp_servers:                 # ntp servers in `/etc/chrony.conf`
      - pool pool.ntp.org iburst
    node_crontab_overwrite: true      # overwrite or append to `/etc/crontab`?
    node_crontab: [ ]                 # crontab entries in `/etc/crontab`
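    # e.g. (illustrative): schedule a nightly full backup on pgsql nodes:
    # node_crontab:
    #   - '00 01 * * * postgres /pg/bin/pg-backup full'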

    #-----------------------------------------------------------------
    # NODE_VIP
    #-----------------------------------------------------------------
    vip_enabled: false                # enable vip on this node cluster?
    # vip_address:         [IDENTITY] # node vip address in ipv4 format, required if vip is enabled
    # vip_vrid:            [IDENTITY] # required, integer, 1-254, should be unique among same VLAN
    vip_role: backup                  # optional, `master|backup`, backup by default, use as init role
    vip_preempt: false                # optional, `true/false`, false by default, enable vip preemption
    vip_interface: eth0               # node vip network interface to listen, `eth0` by default
    vip_dns_suffix: ''                # node vip dns name suffix, empty string by default
    vip_exporter_port: 9650           # keepalived exporter listen port, 9650 by default

    #-----------------------------------------------------------------
    # HAPROXY
    #-----------------------------------------------------------------
    haproxy_enabled: true             # enable haproxy on this node?
    haproxy_clean: false              # cleanup all existing haproxy config?
    haproxy_reload: true              # reload haproxy after config?
    haproxy_auth_enabled: true        # enable authentication for haproxy admin page
    haproxy_admin_username: admin     # haproxy admin username, `admin` by default
    haproxy_admin_password: pigsty    # haproxy admin password, `pigsty` by default
    haproxy_exporter_port: 9101       # haproxy admin/exporter port, 9101 by default
    haproxy_client_timeout: 24h       # client side connection timeout, 24h by default
    haproxy_server_timeout: 24h       # server side connection timeout, 24h by default
    haproxy_services: []              # list of haproxy service to be exposed on node
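    # e.g. (illustrative, names/IPs hypothetical): expose an extra tcp service via node haproxy:
    # haproxy_services:
    #   - name: redis-test
    #     port: 6379
    #     servers:
    #       - { name: redis-test-1 ,ip: 10.10.10.13 ,port: 6379 ,options: check }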

    #-----------------------------------------------------------------
    # NODE_EXPORTER
    #-----------------------------------------------------------------
    node_exporter_enabled: true       # setup node_exporter on this node?
    node_exporter_port: 9100          # node exporter listen port, 9100 by default
    node_exporter_options: '--no-collector.softnet --no-collector.nvme --collector.tcpstat --collector.processes'

    #-----------------------------------------------------------------
    # VECTOR
    #-----------------------------------------------------------------
    vector_enabled: true              # enable vector log collector?
    vector_clean: false               # purge vector data dir during init?
    vector_data: /data/vector         # vector data dir, /data/vector by default
    vector_port: 9598                 # vector metrics port, 9598 by default
    vector_read_from: beginning       # vector read from beginning or end
    vector_log_endpoint: [ infra ]    # if defined, send vector logs to this endpoint


    #================================================================#
    #                        VARS: DOCKER                            #
    #================================================================#
    docker_enabled: false             # enable docker on this node?
    docker_data: /data/docker         # docker data directory, /data/docker by default
    docker_storage_driver: overlay2   # docker storage driver, can be zfs, btrfs
    docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
    docker_registry_mirrors: []       # docker registry mirror list
    docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
    docker_image: []                  # docker image to be pulled after bootstrap
    docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern

    #================================================================#
    #                         VARS: ETCD                             #
    #================================================================#
    #etcd_seq: 1                      # etcd instance identifier, explicitly required
    etcd_cluster: etcd                # etcd cluster & group name, etcd by default
    etcd_safeguard: false             # prevent purging running etcd instance?
    etcd_clean: true                  # purge existing etcd during initialization?
    etcd_data: /data/etcd             # etcd data directory, /data/etcd by default
    etcd_port: 2379                   # etcd client port, 2379 by default
    etcd_peer_port: 2380              # etcd peer port, 2380 by default
    etcd_init: new                    # etcd initial cluster state, new or existing
    etcd_election_timeout: 1000       # etcd election timeout, 1000ms by default
    etcd_heartbeat_interval: 100      # etcd heartbeat interval, 100ms by default
    etcd_root_password: Etcd.Root     # etcd root password for RBAC, change it!
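    # e.g. (illustrative, IPs hypothetical): a three-node etcd cluster group:
    # etcd:
    #   hosts:
    #     10.10.10.11: { etcd_seq: 1 }
    #     10.10.10.12: { etcd_seq: 2 }
    #     10.10.10.13: { etcd_seq: 3 }
    #   vars: { etcd_cluster: etcd }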


    #================================================================#
    #                         VARS: MINIO                            #
    #================================================================#
    #minio_seq: 1                     # minio instance identifier, REQUIRED
    minio_cluster: minio              # minio cluster identifier, REQUIRED
    minio_clean: false                # cleanup minio during init? false by default
    minio_user: minio                 # minio os user, `minio` by default
    minio_https: true                 # use https for minio, true by default
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multiple drives
    #minio_volumes:                   # minio data volumes, override defaults if specified
    minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
    minio_port: 9000                  # minio service port, 9000 by default
    minio_admin_port: 9001            # minio console port, 9001 by default
    minio_access_key: minioadmin      # root access key, `minioadmin` by default
    minio_secret_key: S3User.MinIO    # root secret key, `S3User.MinIO` by default
    minio_extra_vars: ''              # extra environment variables
    minio_provision: true             # run minio provisioning tasks?
    minio_alias: sss                  # alias name for local minio deployment
    #minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
    minio_buckets:                    # list of minio bucket to be created
      - { name: pgsql }
      - { name: meta ,versioning: true }
      - { name: data }
    minio_users:                      # list of minio user to be created
      - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
      - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
      - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }
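    # e.g. (illustrative, IPs hypothetical): a multi-node / multi-drive deployment uses {x...y} expansion:
    # minio:
    #   hosts:
    #     10.10.10.11: { minio_seq: 1 }
    #     10.10.10.12: { minio_seq: 2 }
    #   vars: { minio_cluster: minio ,minio_data: '/data{1...4}' }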


    #================================================================#
    #                         VARS: REDIS                            #
    #================================================================#
    #redis_cluster:        <CLUSTER> # redis cluster name, required identity parameter
    #redis_node: 1            <NODE> # redis node sequence number, node int id required
    #redis_instances: {}      <NODE> # redis instances definition on this redis node
    redis_fs_main: /data              # redis main data mountpoint, `/data` by default
    redis_exporter_enabled: true      # install redis exporter on redis nodes?
    redis_exporter_port: 9121         # redis exporter listen port, 9121 by default
    redis_exporter_options: ''        # cli args and extra options for redis exporter
    redis_mode: standalone            # redis mode: standalone,cluster,sentinel
    redis_conf: redis.conf            # redis config template path, except sentinel
    redis_bind_address: '0.0.0.0'     # redis bind address, empty string will use host ip
    redis_max_memory: 1GB             # max memory used by each redis instance
    redis_mem_policy: allkeys-lru     # redis memory eviction policy
    redis_password: ''                # redis password, empty string will disable password
    redis_rdb_save: ['1200 1']        # redis rdb save directives, disable with empty list
    redis_aof_enabled: false          # enable redis append only file?
    redis_rename_commands: {}         # rename redis dangerous commands
    redis_cluster_replicas: 1         # replica number for one master in redis cluster
    redis_sentinel_monitor: []        # sentinel master list, works on sentinel cluster only
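    # e.g. (illustrative, IPs hypothetical): a standalone master-replica pair on one node:
    # redis-ms:
    #   hosts:
    #     10.10.10.10: { redis_node: 1 ,redis_instances: { 6379: {} ,6380: { replica_of: '10.10.10.10 6379' } } }
    #   vars: { redis_cluster: redis-ms ,redis_mode: standalone ,redis_max_memory: 64MB }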


    #================================================================#
    #                         VARS: PGSQL                            #
    #================================================================#

    #-----------------------------------------------------------------
    # PG_IDENTITY
    #-----------------------------------------------------------------
    pg_mode: pgsql          #CLUSTER  # pgsql cluster mode: pgsql,citus,gpsql,mssql,mysql,ivory,polar
    # pg_cluster:           #CLUSTER  # pgsql cluster name, required identity parameter
    # pg_seq: 0             #INSTANCE # pgsql instance seq number, required identity parameter
    # pg_role: replica      #INSTANCE # pgsql role, required, could be primary,replica,offline
    # pg_instances: {}      #INSTANCE # define multiple pg instances on node in `{port:ins_vars}` format
    # pg_upstream:          #INSTANCE # repl upstream ip addr for standby cluster or cascade replica
    # pg_shard:             #CLUSTER  # pgsql shard name, optional identity for sharding clusters
    # pg_group: 0           #CLUSTER  # pgsql shard index number, optional identity for sharding clusters
    # gp_role: master       #CLUSTER  # greenplum role of this cluster, could be master or segment
    pg_offline_query: false #INSTANCE # set to true to enable offline queries on this instance
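    # e.g. (illustrative, names/IPs hypothetical): a three-node cluster pg-test
    # would set these identity parameters in its inventory group:
    # pg-test:
    #   hosts:
    #     10.10.10.11: { pg_seq: 1, pg_role: primary }
    #     10.10.10.12: { pg_seq: 2, pg_role: replica }
    #     10.10.10.13: { pg_seq: 3, pg_role: offline }
    #   vars: { pg_cluster: pg-test }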

    #-----------------------------------------------------------------
    # PG_BUSINESS
    #-----------------------------------------------------------------
    # postgres business object definition, overwrite in group vars
    pg_users: []                      # postgres business users
    pg_databases: []                  # postgres business databases
    pg_services: []                   # postgres business services
    pg_hba_rules: []                  # business hba rules for postgres
    pgb_hba_rules: []                 # business hba rules for pgbouncer
    # global credentials, overwrite in global vars
    pg_dbsu_password: ''              # dbsu password, empty string means no dbsu password by default
    pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
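    # e.g. (illustrative, names hypothetical): business objects are usually defined in cluster group vars:
    # pg_users:
    #   - { name: dbuser_app ,password: DBUser.App ,pgbouncer: true ,roles: [dbrole_readwrite] ,comment: app owner }
    # pg_databases:
    #   - { name: app ,owner: dbuser_app ,comment: app database }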

    #-----------------------------------------------------------------
    # PG_INSTALL
    #-----------------------------------------------------------------
    pg_dbsu: postgres                 # os dbsu name, postgres by default, better not change it
    pg_dbsu_uid: 543                  # os dbsu uid and gid, 26 for default postgres users and groups
    pg_dbsu_sudo: limit               # dbsu sudo privilege, none,limit,all,nopass. limit by default
    pg_dbsu_home: /var/lib/pgsql      # postgresql home directory, `/var/lib/pgsql` by default
    pg_dbsu_ssh_exchange: true        # exchange postgres dbsu ssh key among same pgsql cluster
    pg_version: 18                    # postgres major version to be installed, 18 by default
    pg_bin_dir: /usr/pgsql/bin        # postgres binary dir, `/usr/pgsql/bin` by default
    pg_log_dir: /pg/log/postgres      # postgres log dir, `/pg/log/postgres` by default
    pg_packages:                      # pg packages to be installed, alias can be used
      - pgsql-main pgsql-common
    pg_extensions: []                 # pg extensions to be installed, alias can be used
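    # e.g. (illustrative): extra extensions can be listed by alias, e.g.:
    # pg_extensions: [ postgis, pgvector, pg_cron ]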

    #-----------------------------------------------------------------
    # PG_BOOTSTRAP
    #-----------------------------------------------------------------
    pg_data: /pg/data                 # postgres data directory, `/pg/data` by default
    pg_fs_main: /data/postgres        # postgres main data directory, `/data/postgres` by default
    pg_fs_backup: /data/backups       # postgres backup data directory, `/data/backups` by default
    pg_storage_type: SSD              # storage type for pg main data, SSD,HDD, SSD by default
    pg_dummy_filesize: 64MiB          # size of `/pg/dummy`, hold 64MB disk space for emergency use
    pg_listen: '0.0.0.0'              # postgres/pgbouncer listen addresses, comma separated list
    pg_port: 5432                     # postgres listen port, 5432 by default
    pg_localhost: /var/run/postgresql # postgres unix socket dir for localhost connection
    patroni_enabled: true             # if disabled, no postgres cluster will be created during init
    patroni_mode: default             # patroni working mode: default,pause,remove
    pg_namespace: /pg                 # top level key namespace in etcd, used by patroni & vip
    patroni_port: 8008                # patroni listen port, 8008 by default
    patroni_log_dir: /pg/log/patroni  # patroni log dir, `/pg/log/patroni` by default
    patroni_ssl_enabled: false        # secure patroni RestAPI communications with SSL?
    patroni_watchdog_mode: off        # patroni watchdog mode: automatic,required,off. off by default
    patroni_username: postgres        # patroni restapi username, `postgres` by default
    patroni_password: Patroni.API     # patroni restapi password, `Patroni.API` by default
    pg_etcd_password: ''              # etcd password for this pg cluster, '' to use pg_cluster
    pg_primary_db: postgres           # primary database name, used by citus etc., postgres by default
    pg_parameters: {}                 # extra parameters in postgresql.auto.conf
    pg_files: []                      # extra files to be copied to postgres data directory (e.g. license)
    pg_conf: oltp.yml                 # config template: oltp,olap,crit,tiny. `oltp.yml` by default
    pg_max_conn: auto                 # postgres max connections, `auto` will use recommended value
    pg_shared_buffer_ratio: 0.25      # postgres shared buffers ratio, 0.25 by default, 0.1~0.4
    pg_io_method: worker              # io method for postgres, auto,fsync,worker,io_uring, worker by default
    pg_rto: norm                      # shared rto mode for patroni & haproxy: fast,norm,safe,wide
    pg_rpo: 1048576                   # recovery point objective in bytes, `1MiB` at most by default
    pg_libs: 'pg_stat_statements, auto_explain'  # preloaded libraries, `pg_stat_statements,auto_explain` by default
    pg_delay: 0                       # replication apply delay for standby cluster leader
    pg_checksum: true                 # enable data checksum for postgres cluster?
    pg_encoding: UTF8                 # database cluster encoding, `UTF8` by default
    pg_locale: C                      # database cluster locale, `C` by default
    pg_lc_collate: C                  # database cluster collation, `C` by default
    pg_lc_ctype: C                    # database character type, `C` by default
    #pgsodium_key: ""                 # pgsodium key, 64 hex digit, default to sha256(pg_cluster)
    #pgsodium_getkey_script: ""       # pgsodium getkey script path, pgsodium_getkey by default

    #-----------------------------------------------------------------
    # PG_PROVISION
    #-----------------------------------------------------------------
    pg_provision: true                # provision postgres cluster after bootstrap
    pg_init: pg-init                  # provision init script for cluster template, `pg-init` by default
    pg_default_roles:                 # default roles and users in postgres cluster
      - { name: dbrole_readonly  ,login: false ,comment: role for global read-only access     }
      - { name: dbrole_offline   ,login: false ,comment: role for restricted read-only access }
      - { name: dbrole_readwrite ,login: false ,roles: [dbrole_readonly] ,comment: role for global read-write access }
      - { name: dbrole_admin     ,login: false ,roles: [pg_monitor, dbrole_readwrite] ,comment: role for object creation }
      - { name: postgres     ,superuser: true  ,comment: system superuser }
      - { name: replicator ,replication: true  ,roles: [pg_monitor, dbrole_readonly] ,comment: system replicator }
      - { name: dbuser_dba   ,superuser: true  ,roles: [dbrole_admin]  ,pgbouncer: true ,pool_mode: session, pool_connlimit: 16 ,comment: pgsql admin user }
      - { name: dbuser_monitor ,roles: [pg_monitor] ,pgbouncer: true ,parameters: {log_min_duration_statement: 1000 } ,pool_mode: session ,pool_connlimit: 8 ,comment: pgsql monitor user }
    pg_default_privileges:            # default privileges when created by admin user
      - GRANT USAGE      ON SCHEMAS   TO dbrole_readonly
      - GRANT SELECT     ON TABLES    TO dbrole_readonly
      - GRANT SELECT     ON SEQUENCES TO dbrole_readonly
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_readonly
      - GRANT USAGE      ON SCHEMAS   TO dbrole_offline
      - GRANT SELECT     ON TABLES    TO dbrole_offline
      - GRANT SELECT     ON SEQUENCES TO dbrole_offline
      - GRANT EXECUTE    ON FUNCTIONS TO dbrole_offline
      - GRANT INSERT     ON TABLES    TO dbrole_readwrite
      - GRANT UPDATE     ON TABLES    TO dbrole_readwrite
      - GRANT DELETE     ON TABLES    TO dbrole_readwrite
      - GRANT USAGE      ON SEQUENCES TO dbrole_readwrite
      - GRANT UPDATE     ON SEQUENCES TO dbrole_readwrite
      - GRANT TRUNCATE   ON TABLES    TO dbrole_admin
      - GRANT REFERENCES ON TABLES    TO dbrole_admin
      - GRANT TRIGGER    ON TABLES    TO dbrole_admin
      - GRANT CREATE     ON SCHEMAS   TO dbrole_admin
    pg_default_schemas: [ monitor ]   # default schemas to be created
    pg_default_extensions:            # default extensions to be created
      - { name: pg_stat_statements ,schema: monitor }
      - { name: pgstattuple        ,schema: monitor }
      - { name: pg_buffercache     ,schema: monitor }
      - { name: pageinspect        ,schema: monitor }
      - { name: pg_prewarm         ,schema: monitor }
      - { name: pg_visibility      ,schema: monitor }
      - { name: pg_freespacemap    ,schema: monitor }
      - { name: postgres_fdw       ,schema: public  }
      - { name: file_fdw           ,schema: public  }
      - { name: btree_gist         ,schema: public  }
      - { name: btree_gin          ,schema: public  }
      - { name: pg_trgm            ,schema: public  }
      - { name: intagg             ,schema: public  }
      - { name: intarray           ,schema: public  }
      - { name: pg_repack }
    pg_reload: true                   # reload postgres after hba changes
    pg_default_hba_rules:             # postgres default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
      - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
      - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
      - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
      - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
      - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
      - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
      - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
      - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
      - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
      - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}
    pgb_default_hba_rules:            # pgbouncer default host-based authentication rules, order by `order`
      - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
      - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
      - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
      - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
      - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
      - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
      - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

    #-----------------------------------------------------------------
    # PG_BACKUP
    #-----------------------------------------------------------------
    pgbackrest_enabled: true          # enable pgbackrest on pgsql host?
    pgbackrest_log_dir: /pg/log/pgbackrest # pgbackrest log dir, `/pg/log/pgbackrest` by default
    pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_init_backup: true      # take a full backup after pgbackrest is initialized?
    pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                          # default pgbackrest repo with local posix fs
        path: /pg/backup              # local backup directory, `/pg/backup` by default
        retention_full_type: count    # retention full backups by count
        retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
      minio:                          # optional minio repo for pgbackrest
        type: s3                      # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql              # minio bucket name, `pgsql` by default
        s3_key: pgbackrest            # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
        s3_uri_style: path            # use path style uri for minio rather than host style
        path: /pgbackrest             # minio backup path, default is `/pgbackrest`
        storage_port: 9000            # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                      # Enable block incremental backup
        bundle: y                     # bundle small files into a single file
        bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
        cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
        retention_full_type: time     # retention full backup by time on minio repo
        retention_full: 14            # keep full backups for the last 14 days

    #-----------------------------------------------------------------
    # PG_ACCESS
    #-----------------------------------------------------------------
    pgbouncer_enabled: true           # if disabled, pgbouncer will not be launched on pgsql host
    pgbouncer_port: 6432              # pgbouncer listen port, 6432 by default
    pgbouncer_log_dir: /pg/log/pgbouncer  # pgbouncer log dir, `/pg/log/pgbouncer` by default
    pgbouncer_auth_query: false       # query postgres to retrieve unlisted business users?
    pgbouncer_poolmode: transaction   # pooling mode: transaction,session,statement, transaction by default
    pgbouncer_sslmode: disable        # pgbouncer client ssl mode, disable by default
    pgbouncer_ignore_param: [ extra_float_digits, application_name, TimeZone, DateStyle, IntervalStyle, search_path ]
    pg_weight: 100          #INSTANCE # relative load balance weight in service, 100 by default, 0-255
    pg_service_provider: ''           # dedicate haproxy node group name, or empty string for local nodes by default
    pg_default_service_dest: pgbouncer # default service destination if svc.dest='default'
    pg_default_services:              # postgres default service definitions
      - { name: primary ,port: 5433 ,dest: default  ,check: /primary   ,selector: "[]" }
      - { name: replica ,port: 5434 ,dest: default  ,check: /read-only ,selector: "[]" , backup: "[? pg_role == `primary` || pg_role == `offline` ]" }
      - { name: default ,port: 5436 ,dest: postgres ,check: /primary   ,selector: "[]" }
      - { name: offline ,port: 5438 ,dest: postgres ,check: /replica   ,selector: "[? pg_role == `offline` || pg_offline_query ]" , backup: "[? pg_role == `replica` && !pg_offline_query]"}
    pg_vip_enabled: false             # enable a l2 vip for pgsql primary? false by default
    pg_vip_address: 127.0.0.1/24      # vip address in `<ipv4>/<mask>` format, require if vip is enabled
    pg_vip_interface: eth0            # vip network interface to listen, eth0 by default
    pg_dns_suffix: ''                 # pgsql dns suffix, '' by default
    pg_dns_target: auto               # auto, primary, vip, none, or ad hoc ip

    #-----------------------------------------------------------------
    # PG_MONITOR
    #-----------------------------------------------------------------
    pg_exporter_enabled: true              # enable pg_exporter on pgsql hosts?
    pg_exporter_config: pg_exporter.yml    # pg_exporter configuration file name
    pg_exporter_cache_ttls: '1,10,60,300'  # pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
    pg_exporter_port: 9630                 # pg_exporter listen port, 9630 by default
    pg_exporter_params: 'sslmode=disable'  # extra url parameters for pg_exporter dsn
    pg_exporter_url: ''                    # overwrite auto-generate pg dsn if specified
    pg_exporter_auto_discovery: true       # enable auto database discovery? enabled by default
    pg_exporter_exclude_database: 'template0,template1,postgres' # csv of databases that WILL NOT be monitored during auto-discovery
    pg_exporter_include_database: ''       # csv of databases that WILL BE monitored during auto-discovery
    pg_exporter_connect_timeout: 200       # pg_exporter connect timeout in ms, 200 by default
    pg_exporter_options: ''                # overwrite extra options for pg_exporter
    pgbouncer_exporter_enabled: true       # enable pgbouncer_exporter on pgsql hosts?
    pgbouncer_exporter_port: 9631          # pgbouncer_exporter listen port, 9631 by default
    pgbouncer_exporter_url: ''             # overwrite auto-generate pgbouncer dsn if specified
    pgbouncer_exporter_options: ''         # overwrite extra options for pgbouncer_exporter
    pgbackrest_exporter_enabled: true      # enable pgbackrest_exporter on pgsql hosts?
    pgbackrest_exporter_port: 9854         # pgbackrest_exporter listen port, 9854 by default
    pgbackrest_exporter_options: >
      --collect.interval=120
      --log.level=info

    #-----------------------------------------------------------------
    # PG_REMOVE
    #-----------------------------------------------------------------
    pg_safeguard: false               # stop pg_remove running if pg_safeguard is enabled, false by default
    pg_rm_data: true                  # remove postgres data during remove? true by default
    pg_rm_backup: true                # remove pgbackrest backup during primary remove? true by default
    pg_rm_pkg: true                   # uninstall postgres packages during remove? true by default

...
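
Most of the cluster-level defaults above can be overridden per cluster in the inventory. A minimal sketch of such an override (the `pg-test` cluster name, IP address, and parameter values here are hypothetical examples, not part of this template):

```yaml
pg-test:                              # hypothetical example cluster
  hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-test
    pg_conf: olap.yml                 # use the olap tuning template instead of oltp.yml
    pg_shared_buffer_ratio: 0.3       # raise shared_buffers to 30% of node memory
    pg_parameters:                    # extra settings written to postgresql.auto.conf
      max_parallel_workers_per_gather: 4
```

Cluster-level `vars` take precedence over the global defaults shown above.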

Explanation

The demo/debian template is optimized for Debian and Ubuntu distributions.

Supported Distributions:

  • Debian 12 (Bookworm)
  • Debian 13 (Trixie)
  • Ubuntu 22.04 LTS (Jammy)
  • Ubuntu 24.04 LTS (Noble)

Key Features:

  • Uses PGDG APT repositories
  • Optimized for APT package manager
  • Supports Debian/Ubuntu-specific package names

Use Cases:

  • Cloud servers (Ubuntu is widely used there)
  • Container environments (Debian is a common base image)
  • Development and testing environments

8.35 - demo/demo

Pigsty public demo site configuration, showcasing SSL certificates, domain exposure, and full extension installation

The demo/demo configuration template is used by Pigsty’s public demo site, demonstrating how to expose services publicly, configure SSL certificates, and install all available extensions.

If you want to set up your own public service on a cloud server, you can use this template as a reference.


Overview

  • Config Name: demo/demo
  • Node Count: Single node
  • Description: Pigsty public demo site configuration
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64
  • Related: meta, rich

Usage:

./configure -c demo/demo [-i <primary_ip>]

Key Features

This template enhances the meta template with:

  • Configures SSL certificates and a custom domain (e.g., pigsty.cc)
  • Downloads and installs all available PostgreSQL 18 extensions
  • Enables Docker with image acceleration
  • Deploys MinIO object storage
  • Pre-configures multiple business databases and users
  • Adds Redis primary-replica instance examples
  • Adds FerretDB MongoDB-compatible cluster
  • Adds Kafka sample cluster

Content

Source: pigsty/conf/demo/demo.yml

---
#==============================================================#
# File      :   demo.yml
# Desc      :   Pigsty Public Demo Configuration
# Ctime     :   2020-05-22
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra:
      hosts: { 10.10.10.10: { infra_seq: 1 } }
      vars:
        nodename: pigsty.cc       # overwrite the default hostname
        node_id_from_pg: false    # do not use the pg identity as hostname
        docker_enabled: true      # enable docker on this node
        docker_registry_mirrors: ["https://mirror.ccs.tencentyun.com", "https://docker.1ms.run"]
        # ./pgsql-monitor.yml -l infra     # monitor 'external' PostgreSQL instance
        pg_exporters:             # treat local postgres as RDS for demonstration purpose
          20001: { pg_cluster: pg-foo, pg_seq: 1, pg_host: 10.10.10.10 }
          #20002: { pg_cluster: pg-bar, pg_seq: 1, pg_host: 10.10.10.11 , pg_port: 5432 }
          #20003: { pg_cluster: pg-bar, pg_seq: 2, pg_host: 10.10.10.12 , pg_exporter_url: 'postgres://dbuser_monitor:[email protected]:5432/postgres?sslmode=disable' }
          #20004: { pg_cluster: pg-bar, pg_seq: 3, pg_host: 10.10.10.13 , pg_monitor_username: dbuser_monitor, pg_monitor_password: DBUser.Monitor }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # postgres example cluster: pg-meta
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          - {name: dbuser_meta       ,password: DBUser.Meta       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
          - {name: dbuser_view       ,password: DBUser.Viewer     ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
          - {name: dbuser_grafana    ,password: DBUser.Grafana    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
          - {name: dbuser_bytebase   ,password: DBUser.Bytebase   ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
          - {name: dbuser_kong       ,password: DBUser.Kong       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
          - {name: dbuser_gitea      ,password: DBUser.Gitea      ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
          - {name: dbuser_wiki       ,password: DBUser.Wiki       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
          - {name: dbuser_noco       ,password: DBUser.Noco       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
          - {name: dbuser_odoo       ,password: DBUser.Odoo       ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for odoo service ,createdb: true } #,superuser: true}
          - {name: dbuser_mattermost ,password: DBUser.MatterMost ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for mattermost ,createdb: true }
        pg_databases:
          - {name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: vector},{name: postgis},{name: timescaledb}]}
          - {name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database  }
          - {name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
          - {name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong api gateway database }
          - {name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
          - {name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database  }
          - {name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database     }
          #- {name: odoo     ,owner: dbuser_odoo     ,revokeconn: true ,comment: odoo main database  }
          - {name: mattermost ,owner: dbuser_mattermost ,revokeconn: true ,comment: mattermost main database }
        pg_hba_rules:
          - {user: dbuser_view , db: all ,addr: infra ,auth: pwd ,title: 'allow grafana dashboard access cmdb from infra nodes'}
        pg_libs: 'timescaledb,pg_stat_statements, auto_explain'  # add timescaledb to shared_preload_libraries
        pg_extensions: # extensions to be installed on this cluster
          - timescaledb timescaledb_toolkit pg_timeseries periods temporal_tables emaj table_version pg_cron pg_task pg_later pg_background
          - postgis pgrouting pointcloud pg_h3 q3c ogr_fdw geoip pg_polyline pg_geohash #mobilitydb
          - pgvector vchord pgvectorscale pg_vectorize pg_similarity smlar pg_summarize pg_tiktoken pg4ml #pgml
          - pg_search pgroonga pg_bigm zhparser pg_bestmatch vchord_bm25 hunspell
          - citus hydra pg_analytics pg_duckdb pg_mooncake duckdb_fdw pg_parquet pg_fkpart pg_partman plproxy #pg_strom
          - age hll rum pg_graphql pg_jsonschema jsquery pg_hint_plan hypopg index_advisor pg_plan_filter imgsmlr pg_ivm pg_incremental pgmq pgq pg_cardano omnigres #rdkit
          - pg_tle plv8 pllua plprql pldebugger plpgsql_check plprofiler plsh pljava #plr #pgtap #faker #dbt2
          - pg_prefix pg_semver pgunit pgpdf pglite_fusion md5hash asn1oid roaringbitmap pgfaceting pgsphere pg_country pg_xenophile pg_currency pg_collection pgmp numeral pg_rational pguint pg_uint128 hashtypes ip4r pg_uri pgemailaddr pg_acl timestamp9 chkpass #pg_duration #debversion #pg_rrule
          - pg_gzip pg_bzip pg_zstd pg_http pg_net pg_curl pgjq pgjwt pg_smtp_client pg_html5_email_address url_encode pgsql_tweaks pg_extra_time pgpcre icu_ext pgqr pg_protobuf envvar floatfile pg_readme ddl_historization data_historization pg_schedoc pg_hashlib pg_xxhash shacrypt cryptint pg_ecdsa pgsparql
          - pg_idkit pg_uuidv7 permuteseq pg_hashids sequential_uuids topn quantile lower_quantile count_distinct omnisketch ddsketch vasco pgxicor tdigest first_last_agg extra_window_functions floatvec aggs_for_vecs aggs_for_arrays pg_arraymath pg_math pg_random pg_base36 pg_base62 pg_base58 pg_financial
          - pg_repack pg_squeeze pg_dirtyread pgfincore pg_cooldown pg_ddlx pg_prioritize pg_checksums pg_readonly pg_upless pg_permissions pgautofailover pg_catcheck preprepare pgcozy pg_orphaned pg_crash pg_cheat_funcs pg_fio pg_savior safeupdate pg_drop_events table_log #pgagent #pgpool
          - pg_profile pg_tracing pg_show_plans pg_stat_kcache pg_stat_monitor pg_qualstats pg_store_plans pg_track_settings pg_wait_sampling system_stats pg_meta pgnodemx pg_sqlog bgw_replstatus pgmeminfo toastinfo pg_explain_ui pg_relusage pagevis powa
          - passwordcheck supautils pgsodium pg_vault pg_session_jwt pg_anon pg_tde pgsmcrypto pgaudit pgauditlogtofile pg_auth_mon credcheck pgcryptokey pg_jobmon logerrors login_hook set_user pg_snakeoil pgextwlist pg_auditor sslutils pg_noset
          - wrappers multicorn odbc_fdw jdbc_fdw mysql_fdw tds_fdw sqlite_fdw pgbouncer_fdw mongo_fdw redis_fdw pg_redis_pubsub kafka_fdw hdfs_fdw firebird_fdw aws_s3 log_fdw #oracle_fdw #db2_fdw
          - documentdb orafce pgtt session_variable pg_statement_rollback pg_dbms_metadata pg_dbms_lock pgmemcache #pg_dbms_job #wiltondb
          - pglogical pglogical_ticker pgl_ddl_deploy pg_failover_slots db_migrator wal2json wal2mongo decoderbufs decoder_raw mimeo pg_fact_loader pg_bulkload #repmgr

    redis-ms: # redis classic primary & replica
      hosts: { 10.10.10.10: { redis_node: 1 , redis_instances: { 6379: { }, 6380: { replica_of: '10.10.10.10 6379' }, 6381: { replica_of: '10.10.10.10 6379' } } } }
      vars: { redis_cluster: redis-ms ,redis_password: 'redis.ms' ,redis_max_memory: 64MB }

    # ./mongo.yml -l pg-mongo
    pg-mongo:
      hosts: { 10.10.10.10: { mongo_seq: 1 } }
      vars:
        mongo_cluster: pg-mongo
        mongo_pgurl: 'postgres://dbuser_meta:[email protected]:5432/grafana'

    # ./kafka.yml -l kf-main
    kf-main:
      hosts: { 10.10.10.10: { kafka_seq: 1, kafka_role: controller } }
      vars:
        kafka_cluster: kf-main
        kafka_peer_port: 9093


  vars:                               # global variables
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: china                     # upstream mirror region: default|china|europe

    infra_portal:                     # infra services exposed via portal
      home         : { domain: i.pigsty }     # default domain name
      cc           : { domain: pigsty.cc      ,path:     "/www/pigsty.cc"   ,cert: /etc/cert/pigsty.cc.crt ,key: /etc/cert/pigsty.cc.key }
      minio        : { domain: m.pigsty.cc    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
      postgrest    : { domain: api.pigsty.cc  ,endpoint: "127.0.0.1:8884"   }
      pgadmin      : { domain: adm.pigsty.cc  ,endpoint: "127.0.0.1:8885"   }
      pgweb        : { domain: cli.pigsty.cc  ,endpoint: "127.0.0.1:8886"   }
      bytebase     : { domain: ddl.pigsty.cc  ,endpoint: "127.0.0.1:8887"   }
      jupyter      : { domain: lab.pigsty.cc  ,endpoint: "127.0.0.1:8888", websocket: true }
      gitea        : { domain: git.pigsty.cc  ,endpoint: "127.0.0.1:8889" }
      wiki         : { domain: wiki.pigsty.cc ,endpoint: "127.0.0.1:9002" }
      noco         : { domain: noco.pigsty.cc ,endpoint: "127.0.0.1:9003" }
      supa         : { domain: supa.pigsty.cc ,endpoint: "10.10.10.10:8000" ,websocket: true }
      dify         : { domain: dify.pigsty.cc ,endpoint: "10.10.10.10:8001" ,websocket: true }
      odoo         : { domain: odoo.pigsty.cc ,endpoint: "127.0.0.1:8069"   ,websocket: true }
      mm           : { domain: mm.pigsty.cc   ,endpoint: "10.10.10.10:8065" ,websocket: true }
    # scp -r ~/pgsty/cc/cert/*       pj:/etc/cert/       # copy https certs
    # scp -r ~/dev/pigsty.cc/public  pj:/www/pigsty.cc   # copy pigsty.cc website


    node_etc_hosts: [ "${admin_ip} sss.pigsty" ]
    node_timezone: Asia/Hong_Kong
    node_ntp_servers:
      - pool cn.pool.ntp.org iburst
      - pool ${admin_ip} iburst       # assume non-admin nodes do not have internet access
    pgbackrest_enabled: false         # do not take backups since this is disposable demo env
    #prometheus_options: '--storage.tsdb.retention.time=15d' # prometheus extra server options
    prometheus_options: '--storage.tsdb.retention.size=3GB' # keep 3GB data at most on demo env

    # install all postgresql18 extensions
    pg_version: 18                    # default postgres version
    repo_extra_packages: [ pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_extensions: [pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl ] #,pg18-olap]

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    grafana_view_password: DBUser.Viewer
    pg_admin_password: DBUser.DBA
    pg_monitor_password: DBUser.Monitor
    pg_replication_password: DBUser.Replicator
    patroni_password: Patroni.API
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
    etcd_root_password: Etcd.Root
...

Explanation

The demo/demo template is Pigsty’s public demo configuration, showcasing a complete production-grade deployment example.

Key Features:

  • HTTPS certificate and custom domain configuration
  • All available PostgreSQL extensions installed
  • Integration with Redis, FerretDB, Kafka, and other components
  • Docker image acceleration configured

Use Cases:

  • Setting up public demo sites
  • Scenarios requiring complete feature demonstration
  • Learning Pigsty advanced configuration

Notes:

  • SSL certificate files must be prepared
  • DNS resolution must be configured
  • Some extensions are not available on ARM64 architecture
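
The `infra_portal` entries in this template show the shape of an SSL-enabled service record. A minimal sketch for exposing your own service (the `demo.example.com` domain, endpoint, and certificate paths are hypothetical placeholders, not part of this template):

```yaml
infra_portal:                   # expose a service via the nginx portal with TLS
  web:                          # hypothetical entry name
    domain: demo.example.com    # your public domain (placeholder)
    endpoint: "127.0.0.1:8080"  # upstream service address (placeholder)
    cert: /etc/cert/demo.example.com.crt  # pre-provisioned certificate
    key: /etc/cert/demo.example.com.key   # and its private key
```

As the `scp` comments in the source above suggest, the certificate and key files must be copied to the target paths on the infra node before deployment.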

8.36 - demo/minio

Four-node x four-drive high-availability multi-node multi-disk MinIO cluster demo

The demo/minio configuration template demonstrates how to deploy a four-node x four-drive, 16-disk total high-availability MinIO cluster, providing S3-compatible object storage services.

For more tutorials, see the MINIO module documentation.


Overview

  • Config Name: demo/minio
  • Node Count: Four nodes
  • Description: High-availability multi-node multi-disk MinIO cluster demo
  • OS Distro: el8, el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64, aarch64
  • Related: meta

Usage:

./configure -c demo/minio

Note: This is a four-node template. You need to modify the IP addresses of the other three nodes after generating the configuration.
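
One way to do the substitution is with `sed`. The sketch below runs against a stand-in file so the effect is visible; in practice you would target the generated `pigsty.yml` instead (the `192.168.1.x` addresses are hypothetical examples):

```shell
# Stand-in for the generated config, holding the template's placeholder IPs
printf '10.10.10.11\n10.10.10.12\n10.10.10.13\n' > /tmp/minio-ips.txt

# Swap each placeholder for a real node address (hypothetical values);
# the admin node 10.10.10.10 is left alone since `configure -i` sets it
sed -i -e 's/10\.10\.10\.11/192.168.1.11/' \
       -e 's/10\.10\.10\.12/192.168.1.12/' \
       -e 's/10\.10\.10\.13/192.168.1.13/' /tmp/minio-ips.txt

cat /tmp/minio-ips.txt   # now lists the three replacement addresses
```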


Content

Source: pigsty/conf/demo/minio.yml

---
#==============================================================#
# File      :   minio.yml
# Desc      :   pigsty: 4 node x 4 disk MNMD minio clusters
# Ctime     :   2023-01-07
# Mtime     :   2025-12-12
# Docs      :   https://doc.pgsty.com/config
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

# One pass installation with:
# ./deploy.yml
#==============================================================#
# 1.  minio-1 @ 10.10.10.10:9000 -  - (9002) svc <-x  10.10.10.9:9002
# 2.  minio-2 @ 10.10.10.11:9000 -xx- (9002) svc <-x <----------------
# 3.  minio-3 @ 10.10.10.12:9000 -xx- (9002) svc <-x  sss.pigsty:9002
# 4.  minio-4 @ 10.10.10.13:9000 -  - (9002) svc <-x  (intranet dns)
#==============================================================#
# use minio load balancer service (9002) instead of direct access (9000)
# mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO
#==============================================================#
# https://min.io/docs/minio/linux/operations/install-deploy-manage/deploy-minio-multi-node-multi-drive.html
# MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4}/minio"


all:
  children:

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # minio cluster with 4 nodes and 4 drives per node
    minio:
      hosts:
        10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
        10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
        10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
        10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
      vars:
        minio_cluster: minio
        minio_data: '/data{1...4}'
        minio_buckets:                    # list of minio bucket to be created
          - { name: pgsql }
          - { name: meta ,versioning: true }
          - { name: data }
        minio_users:                      # list of minio user to be created
          - { access_key: pgbackrest  ,secret_key: S3User.Backup ,policy: pgsql }
          - { access_key: s3user_meta ,secret_key: S3User.Meta   ,policy: meta  }
          - { access_key: s3user_data ,secret_key: S3User.Data   ,policy: data  }

        # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
        node_cluster: minio
        vip_enabled: true
        vip_vrid: 128
        vip_address: 10.10.10.9
        vip_interface: eth1

        # expose minio service with haproxy on all nodes
        haproxy_services:
          - name: minio                    # [REQUIRED] service name, unique
            port: 9002                     # [REQUIRED] service port, unique
            balance: leastconn             # [OPTIONAL] load balancer algorithm
            options:                       # [OPTIONAL] minio health check
              - option httpchk
              - option http-keep-alive
              - http-check send meth OPTIONS uri /minio/health/live
              - http-check expect status 200
            servers:
              - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
              - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

  vars:
    version: v4.0.0                   # pigsty version string
    admin_ip: 10.10.10.10             # admin node ip address
    region: default                   # upstream mirror region: default|china|europe
    infra_portal:                     # infra services exposed via portal
      home : { domain: i.pigsty }     # default domain name

      # domain names to access minio web console via nginx web portal (optional)
      minio        : { domain: m.pigsty     ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio10      : { domain: m10.pigsty   ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
      minio11      : { domain: m11.pigsty   ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
      minio12      : { domain: m12.pigsty   ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
      minio13      : { domain: m13.pigsty   ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }

    minio_endpoint: https://sss.pigsty:9002   # explicit overwrite minio endpoint with haproxy port
    node_etc_hosts: ["10.10.10.9 sss.pigsty"] # domain name to access minio from all nodes (required)

    #----------------------------------------------#
    # PASSWORD : https://doc.pgsty.com/config/security
    #----------------------------------------------#
    grafana_admin_password: pigsty
    haproxy_admin_password: pigsty
    minio_secret_key: S3User.MinIO
...

Explanation

The demo/minio template is a production-grade reference configuration for MinIO, showcasing Multi-Node Multi-Drive (MNMD) architecture.

Key Features:

  • Multi-Node Multi-Drive Architecture: 4 nodes × 4 drives = 16-drive erasure coding group
  • L2 VIP High Availability: Virtual IP binding via Keepalived
  • HAProxy Load Balancing: Unified access endpoint on port 9002
  • Fine-grained Permissions: Separate users and buckets for different applications

Access:

# Configure MinIO alias with mcli (via HAProxy load balancing)
mcli alias set sss https://sss.pigsty:9002 minioadmin S3User.MinIO

# List buckets
mcli ls sss/

# Use console
# Visit https://m.pigsty or https://m10-m13.pigsty

Use Cases:

  • Environments requiring S3-compatible object storage
  • PostgreSQL backup storage (pgBackRest remote repository)
  • Data lake for big data and AI workloads
  • Production environments requiring high-availability object storage

Notes:

  • Each node requires 4 independent disks mounted at /data1 - /data4
  • Production environments recommend at least 4 nodes for erasure coding redundancy
  • VIP requires proper network interface configuration (vip_interface)
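For the pgBackRest use case above, this MinIO cluster can serve as a centralized backup repo. The following is a hedged sketch of the relevant PGSQL parameters: the repo option names follow the pgbackrest_repo convention, but the exact field set and defaults may differ across Pigsty versions, so verify against the reference configuration before use.

```yaml
pgbackrest_method: minio            # switch the backup repo from local disk to minio
pgbackrest_repo:                    # repo definitions, keyed by repo name
  minio:                            # an S3-compatible repo backed by this cluster
    type: s3
    s3_endpoint: sss.pigsty         # the haproxy-backed service domain defined above
    s3_bucket: pgsql                # bucket created in minio_buckets
    s3_key: pgbackrest              # minio user created in minio_users
    s3_key_secret: S3User.Backup
    s3_uri_style: path
    storage_port: 9002              # load-balanced service port, not the direct 9000
    storage_ca_file: /etc/pki/ca.crt
    path: /pgbackrest               # backup path inside the bucket
```

Using the load-balanced 9002 endpoint rather than a single node's 9000 port keeps backups working when any one MinIO node is down.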

8.37 - build/oss

Pigsty open-source edition offline package build environment configuration

The build/oss configuration template is the build environment configuration for Pigsty open-source edition offline packages, used to batch-build offline installation packages across multiple operating systems.

This configuration is intended for developers and contributors only.


Overview

  • Config Name: build/oss
  • Node Count: Six nodes (el9, el10, d12, d13, u22, u24)
  • Description: Pigsty open-source edition offline package build environment
  • OS Distro: el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64

Usage:

cp conf/build/oss.yml pigsty.yml

Note: This is a build template with fixed IP addresses, intended for internal use only.


Content

Source: pigsty/conf/build/oss.yml

---
#==============================================================#
# File      :   oss.yml
# Desc      :   Pigsty 6-node building env (PG18)
# Ctime     :   2024-10-22
# Mtime     :   2025-12-12
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.24
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn,*.pigsty.cc"

    # building spec
    pg_version: 18
    cache_pkg_dir: 'dist/${version}'
    repo_modules: infra,node,pgsql
    repo_packages: [ node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules ]
    repo_extra_packages: [pg18-core ,pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap ,pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]
    pg_extensions:                 [ pg18-time ,pg18-gis ,pg18-rag ,pg18-fts ,pg18-olap, pg18-feat ,pg18-lang ,pg18-type ,pg18-util ,pg18-func ,pg18-admin ,pg18-stat ,pg18-sec ,pg18-fdw ,pg18-sim ,pg18-etl]

  children:
    #el8:  { hosts: { 10.10.10.8:  { pg_cluster: el8 ,pg_seq: 1 ,pg_role: primary }}}
    el9:  { hosts: { 10.10.10.9:  { pg_cluster: el9  ,pg_seq: 1 ,pg_role: primary }}}
    el10: { hosts: { 10.10.10.10: { pg_cluster: el10 ,pg_seq: 1 ,pg_role: primary }}}
    d12:  { hosts: { 10.10.10.12: { pg_cluster: d12  ,pg_seq: 1 ,pg_role: primary }}}
    d13:  { hosts: { 10.10.10.13: { pg_cluster: d13  ,pg_seq: 1 ,pg_role: primary }}}
    u22:  { hosts: { 10.10.10.22: { pg_cluster: u22  ,pg_seq: 1 ,pg_role: primary }}}
    u24:  { hosts: { 10.10.10.24: { pg_cluster: u24  ,pg_seq: 1 ,pg_role: primary }}}
    etcd: { hosts: { 10.10.10.24:  { etcd_seq: 1 }}, vars: { etcd_cluster: etcd    }}
    infra:
      hosts:
        #10.10.10.8:  { infra_seq: 1, admin_ip: 10.10.10.8  ,ansible_host: el8  } #, ansible_python_interpreter: /usr/bin/python3.12 }
        10.10.10.9:  { infra_seq: 2, admin_ip: 10.10.10.9  ,ansible_host: el9  }
        10.10.10.10: { infra_seq: 3, admin_ip: 10.10.10.10 ,ansible_host: el10 }
        10.10.10.12: { infra_seq: 4, admin_ip: 10.10.10.12 ,ansible_host: d12  }
        10.10.10.13: { infra_seq: 5, admin_ip: 10.10.10.13 ,ansible_host: d13  }
        10.10.10.22: { infra_seq: 6, admin_ip: 10.10.10.22 ,ansible_host: u22  }
        10.10.10.24: { infra_seq: 7, admin_ip: 10.10.10.24 ,ansible_host: u24  }
      vars: { node_conf: oltp }

...

Explanation

The build/oss template is the build configuration for Pigsty open-source edition offline packages.

Build Contents:

  • PostgreSQL 18 and all categorized extension packages
  • Infrastructure packages (Prometheus, Grafana, Nginx, etc.)
  • Node packages (monitoring agents, tools, etc.)
  • Extra modules

Supported Operating Systems:

  • EL9 (Rocky/Alma/RHEL 9)
  • EL10 (Rocky 10 / RHEL 10)
  • Debian 12 (Bookworm)
  • Debian 13 (Trixie)
  • Ubuntu 22.04 (Jammy)
  • Ubuntu 24.04 (Noble)

Build Process:

# 1. Prepare build environment
cp conf/build/oss.yml pigsty.yml

# 2. Download packages on each node
./infra.yml -t repo_build

# 3. Package offline installation files
make cache

Use Cases:

  • Pigsty developers building new versions
  • Contributors testing new extensions
  • Enterprise users customizing offline packages

8.38 - build/pro

Pigsty professional edition offline package build environment configuration (multi-version)

The build/pro configuration template is the build environment configuration for Pigsty professional edition offline packages, including PostgreSQL 13-18 all versions and additional commercial components.

This configuration is intended for developers and contributors only.


Overview

  • Config Name: build/pro
  • Node Count: Six nodes (el9, el10, d12, d13, u22, u24)
  • Description: Pigsty professional edition offline package build environment (multi-version)
  • OS Distro: el9, el10, d12, d13, u22, u24
  • OS Arch: x86_64

Usage:

cp conf/build/pro.yml pigsty.yml

Note: This is a build template with fixed IP addresses, intended for internal use only.


Content

Source: pigsty/conf/build/pro.yml

---
#==============================================================#
# File      :   pro.yml
# Desc      :   Pigsty 6-node pro building env (PG 13-18)
# Ctime     :   2024-10-22
# Mtime     :   2025-12-15
# License   :   Apache-2.0 @ https://pigsty.io/docs/about/license/
# Copyright :   2018-2026  Ruohang Feng / Vonng ([email protected])
#==============================================================#

all:
  vars:
    version: v4.0.0
    admin_ip: 10.10.10.24
    region: china
    etcd_clean: true
    proxy_env:
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.myqcloud.com,*.tsinghua.edu.cn,*.pigsty.cc"

    # building spec
    pg_version: 18
    cache_pkg_dir: 'dist/${version}/pro'
    repo_modules: infra,node,pgsql
    pg_extensions: []
    repo_packages: [
      node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-utility, extra-modules,
      pg18-full,pg18-time,pg18-gis,pg18-rag,pg18-fts,pg18-olap,pg18-feat,pg18-lang,pg18-type,pg18-util,pg18-func,pg18-admin,pg18-stat,pg18-sec,pg18-fdw,pg18-sim,pg18-etl,
      pg17-full,pg17-time,pg17-gis,pg17-rag,pg17-fts,pg17-olap,pg17-feat,pg17-lang,pg17-type,pg17-util,pg17-func,pg17-admin,pg17-stat,pg17-sec,pg17-fdw,pg17-sim,pg17-etl,
      pg16-full,pg16-time,pg16-gis,pg16-rag,pg16-fts,pg16-olap,pg16-feat,pg16-lang,pg16-type,pg16-util,pg16-func,pg16-admin,pg16-stat,pg16-sec,pg16-fdw,pg16-sim,pg16-etl,
      pg15-full,pg15-time,pg15-gis,pg15-rag,pg15-fts,pg15-olap,pg15-feat,pg15-lang,pg15-type,pg15-util,pg15-func,pg15-admin,pg15-stat,pg15-sec,pg15-fdw,pg15-sim,pg15-etl,
      pg14-full,pg14-time,pg14-gis,pg14-rag,pg14-fts,pg14-olap,pg14-feat,pg14-lang,pg14-type,pg14-util,pg14-func,pg14-admin,pg14-stat,pg14-sec,pg14-fdw,pg14-sim,pg14-etl,
      pg13-full,pg13-time,pg13-gis,pg13-rag,pg13-fts,pg13-olap,pg13-feat,pg13-lang,pg13-type,pg13-util,pg13-func,pg13-admin,pg13-stat,pg13-sec,pg13-fdw,pg13-sim,pg13-etl,
      infra-extra, kafka, java-runtime, sealos, tigerbeetle, polardb, ivorysql
    ]

  children:
    #el8:  { hosts: { 10.10.10.8:  { pg_cluster: el8 ,pg_seq: 1  ,pg_role: primary }}}
    el9:  { hosts: { 10.10.10.9:  { pg_cluster: el9  ,pg_seq: 1 ,pg_role: primary }}}
    el10: { hosts: { 10.10.10.10: { pg_cluster: el10 ,pg_seq: 1 ,pg_role: primary }}}
    d12:  { hosts: { 10.10.10.12: { pg_cluster: d12  ,pg_seq: 1 ,pg_role: primary }}}
    d13:  { hosts: { 10.10.10.13: { pg_cluster: d13  ,pg_seq: 1 ,pg_role: primary }}}
    u22:  { hosts: { 10.10.10.22: { pg_cluster: u22  ,pg_seq: 1 ,pg_role: primary }}}
    u24:  { hosts: { 10.10.10.24: { pg_cluster: u24  ,pg_seq: 1 ,pg_role: primary }}}
    etcd: { hosts: { 10.10.10.24:  { etcd_seq: 1 }}, vars: { etcd_cluster: etcd    }}
    infra:
      hosts:
        #10.10.10.8:  { infra_seq: 9, admin_ip: 10.10.10.8  ,ansible_host: el8  } #, ansible_python_interpreter: /usr/bin/python3.12 }
        10.10.10.9:  { infra_seq: 1, admin_ip: 10.10.10.9  ,ansible_host: el9  }
        10.10.10.10: { infra_seq: 2, admin_ip: 10.10.10.10 ,ansible_host: el10 }
        10.10.10.12: { infra_seq: 3, admin_ip: 10.10.10.12 ,ansible_host: d12  }
        10.10.10.13: { infra_seq: 4, admin_ip: 10.10.10.13 ,ansible_host: d13  }
        10.10.10.22: { infra_seq: 5, admin_ip: 10.10.10.22 ,ansible_host: u22  }
        10.10.10.24: { infra_seq: 6, admin_ip: 10.10.10.24 ,ansible_host: u24  }
      vars: { node_conf: oltp }

...

Explanation

The build/pro template is the build configuration for Pigsty professional edition offline packages, containing more content than the open-source edition.

Differences from OSS Edition:

  • Includes all six major PostgreSQL versions 13-18
  • Includes additional commercial/enterprise components: Kafka, PolarDB, IvorySQL, etc.
  • Includes Java runtime and Sealos tools
  • Output directory is dist/${version}/pro/

Build Contents:

  • PostgreSQL 13, 14, 15, 16, 17, 18 all versions
  • All categorized extension packages for each version
  • Kafka message queue
  • PolarDB and IvorySQL kernels
  • TigerBeetle distributed database
  • Sealos container platform

Use Cases:

  • Enterprise customers requiring multi-version support
  • Need for Oracle/MySQL compatible kernels
  • Need for Kafka message queue integration
  • Long-term support versions (LTS) requirements

Build Process:

# 1. Prepare build environment
cp conf/build/pro.yml pigsty.yml

# 2. Download packages on each node
./infra.yml -t repo_build

# 3. Package offline installation files
make cache-pro

9 - Modules

10 - Module: PGSQL

Deploy and manage the world’s most advanced open-source relational database — PostgreSQL, customizable and production-ready!

The world’s most advanced open-source relational database!

Pigsty brings it to full potential: batteries-included, reliable, observable, maintainable, and scalable! Config | Admin | Playbooks | Dashboards | Parameters


Overview

Learn key topics and concepts about PostgreSQL.


Config

Describe your desired PostgreSQL cluster


Admin

Manage your PostgreSQL clusters.


Playbooks

Use idempotent playbooks to materialize your config.

Example: Install PGSQL Module

asciicast

Example: Remove PGSQL Module

asciicast


Monitoring

Check PostgreSQL status via Grafana dashboards.

Pigsty has 26 PostgreSQL-related dashboards:

Overview       | Cluster           | Instance         | Database
---------------|-------------------|------------------|---------------
PGSQL Overview | PGSQL Cluster     | PGSQL Instance   | PGSQL Database
PGSQL Alert    | PGRDS Cluster     | PGRDS Instance   | PGCAT Database
PGSQL Shard    | PGSQL Activity    | PGCAT Instance   | PGSQL Tables
               | PGSQL Replication | PGSQL Persist    | PGSQL Table
               | PGSQL Service     | PGSQL Proxy      | PGCAT Table
               | PGSQL Databases   | PGSQL Pgbouncer  | PGSQL Query
               | PGSQL Patroni     | PGSQL Session    | PGCAT Query
               | PGSQL PITR        | PGSQL Xacts      | PGCAT Locks
               |                   | PGSQL Exporter   | PGCAT Schema

Parameters

Config params for the PGSQL module

  • PG_ID: Calculate & validate PostgreSQL instance identity
  • PG_BUSINESS: PostgreSQL biz object definitions
  • PG_INSTALL: Install PostgreSQL kernel, pkgs & extensions
  • PG_BOOTSTRAP: Init HA PostgreSQL cluster with Patroni
  • PG_PROVISION: Create PostgreSQL users, databases & in-db objects
  • PG_BACKUP: Setup backup repo with pgbackrest
  • PG_ACCESS: Expose PostgreSQL services, bind VIP (optional), register DNS
  • PG_MONITOR: Add monitoring for PostgreSQL instance and register to infra
  • PG_REMOVE: Remove PostgreSQL cluster, instance and related resources
Full Parameter List
Parameter | Section | Type | Level | Description
----------|---------|------|-------|------------
pg_mode | PG_ID | enum | C | pgsql cluster mode: pgsql,citus,gpsql
pg_cluster | PG_ID | string | C | pgsql cluster name, REQUIRED identity param
pg_seq | PG_ID | int | I | pgsql instance seq number, REQUIRED identity param
pg_role | PG_ID | enum | I | pgsql role, REQUIRED, could be primary,replica,offline
pg_instances | PG_ID | dict | I | define multiple pg instances on node in {port:ins_vars} format
pg_upstream | PG_ID | ip | I | repl upstream ip for standby cluster or cascade replica
pg_shard | PG_ID | string | C | pgsql shard name, optional identity for sharding clusters
pg_group | PG_ID | int | C | pgsql shard index number, optional identity for sharding clusters
gp_role | PG_ID | enum | C | greenplum role of this cluster, could be master or segment
pg_exporters | PG_ID | dict | C | additional pg_exporters to monitor remote postgres instances
pg_offline_query | PG_ID | bool | I | set true to enable offline query on this instance
pg_users | PG_BUSINESS | user[] | C | postgres biz users
pg_databases | PG_BUSINESS | database[] | C | postgres biz databases
pg_services | PG_BUSINESS | service[] | C | postgres biz services
pg_hba_rules | PG_BUSINESS | hba[] | C | biz hba rules for postgres
pgb_hba_rules | PG_BUSINESS | hba[] | C | biz hba rules for pgbouncer
pg_replication_username | PG_BUSINESS | username | G | postgres replication username, replicator by default
pg_replication_password | PG_BUSINESS | password | G | postgres replication password, DBUser.Replicator by default
pg_admin_username | PG_BUSINESS | username | G | postgres admin username, dbuser_dba by default
pg_admin_password | PG_BUSINESS | password | G | postgres admin password in plain text, DBUser.DBA by default
pg_monitor_username | PG_BUSINESS | username | G | postgres monitor username, dbuser_monitor by default
pg_monitor_password | PG_BUSINESS | password | G | postgres monitor password, DBUser.Monitor by default
pg_dbsu_password | PG_BUSINESS | password | G/C | dbsu password, empty string means no dbsu password by default
pg_dbsu | PG_INSTALL | username | C | os dbsu name, postgres by default, better not change it
pg_dbsu_uid | PG_INSTALL | int | C | os dbsu uid and gid, 26 for default postgres users and groups
pg_dbsu_sudo | PG_INSTALL | enum | C | dbsu sudo privilege, none,limit,all,nopass. limit by default
pg_dbsu_home | PG_INSTALL | path | C | postgresql home dir, /var/lib/pgsql by default
pg_dbsu_ssh_exchange | PG_INSTALL | bool | C | exchange postgres dbsu ssh key among same pgsql cluster
pg_version | PG_INSTALL | enum | C | postgres major version to install, 18 by default
pg_bin_dir | PG_INSTALL | path | C | postgres binary dir, /usr/pgsql/bin by default
pg_log_dir | PG_INSTALL | path | C | postgres log dir, /pg/log/postgres by default
pg_packages | PG_INSTALL | string[] | C | pg pkgs to install, ${pg_version} will be replaced
pg_extensions | PG_INSTALL | string[] | C | pg extensions to install, ${pg_version} will be replaced
pg_clean | PG_BOOTSTRAP | bool | G/C/A | purge existing postgres during pgsql init? true by default
pg_data | PG_BOOTSTRAP | path | C | postgres data dir, /pg/data by default
pg_fs_main | PG_BOOTSTRAP | path | C | mountpoint/path for postgres main data, /data by default
pg_fs_bkup | PG_BOOTSTRAP | path | C | mountpoint/path for pg backup data, /data/backup by default
pg_storage_type | PG_BOOTSTRAP | enum | C | storage type for pg main data, SSD,HDD, SSD by default
pg_dummy_filesize | PG_BOOTSTRAP | size | C | size of /pg/dummy, hold 64MB disk space for emergency use
pg_listen | PG_BOOTSTRAP | ip(s) | C/I | postgres/pgbouncer listen addr, comma separated list
pg_port | PG_BOOTSTRAP | port | C | postgres listen port, 5432 by default
pg_localhost | PG_BOOTSTRAP | path | C | postgres unix socket dir for localhost connection
pg_namespace | PG_BOOTSTRAP | path | C | top level key namespace in etcd, used by patroni & vip
patroni_enabled | PG_BOOTSTRAP | bool | C | if disabled, no postgres cluster will be created during init
patroni_mode | PG_BOOTSTRAP | enum | C | patroni working mode: default,pause,remove
patroni_port | PG_BOOTSTRAP | port | C | patroni listen port, 8008 by default
patroni_log_dir | PG_BOOTSTRAP | path | C | patroni log dir, /pg/log/patroni by default
patroni_ssl_enabled | PG_BOOTSTRAP | bool | G | secure patroni RestAPI comms with SSL?
patroni_watchdog_mode | PG_BOOTSTRAP | enum | C | patroni watchdog mode: automatic,required,off. off by default
patroni_username | PG_BOOTSTRAP | username | C | patroni restapi username, postgres by default
patroni_password | PG_BOOTSTRAP | password | C | patroni restapi password, Patroni.API by default
pg_etcd_password | PG_BOOTSTRAP | password | C | etcd password for this pg cluster, empty to use pg_cluster
pg_primary_db | PG_BOOTSTRAP | string | C | primary database in this cluster, optional, postgres by default
pg_parameters | PG_BOOTSTRAP | dict | C | extra params in postgresql.auto.conf
pg_files | PG_BOOTSTRAP | path[] | C | extra files to copy to postgres data dir
pg_conf | PG_BOOTSTRAP | enum | C | config template: oltp,olap,crit,tiny. oltp.yml by default
pg_max_conn | PG_BOOTSTRAP | int | C | postgres max connections, auto will use recommended value
pg_shared_buffer_ratio | PG_BOOTSTRAP | float | C | postgres shared buffer mem ratio, 0.25 by default, 0.1~0.4
pg_io_method | PG_BOOTSTRAP | enum | C | io method for postgres: auto,sync,worker,io_uring, worker by default
pg_rto | PG_BOOTSTRAP | int | C | recovery time objective in seconds, 30s by default
pg_rpo | PG_BOOTSTRAP | int | C | recovery point objective in bytes, 1MiB at most by default
pg_libs | PG_BOOTSTRAP | string | C | preloaded libs, timescaledb,pg_stat_statements,auto_explain by default
pg_delay | PG_BOOTSTRAP | interval | I | replication apply delay for standby cluster leader
pg_checksum | PG_BOOTSTRAP | bool | C | enable data checksum for postgres cluster?
pg_pwd_enc | PG_BOOTSTRAP | enum | C | password encryption algo: md5,scram-sha-256
pg_encoding | PG_BOOTSTRAP | enum | C | database cluster encoding, UTF8 by default
pg_locale | PG_BOOTSTRAP | enum | C | database cluster locale, C by default
pg_lc_collate | PG_BOOTSTRAP | enum | C | database cluster collate, C by default
pg_lc_ctype | PG_BOOTSTRAP | enum | C | database char type, C by default
pgsodium_key | PG_BOOTSTRAP | string | C | pgsodium key, 64 hex digit, default to sha256(pg_cluster)
pgsodium_getkey_script | PG_BOOTSTRAP | path | C | pgsodium getkey script path
pgbouncer_enabled | PG_ACCESS | bool | C | if disabled, pgbouncer will not be launched on pgsql host
pgbouncer_port | PG_ACCESS | port | C | pgbouncer listen port, 6432 by default
pgbouncer_log_dir | PG_ACCESS | path | C | pgbouncer log dir, /pg/log/pgbouncer by default
pgbouncer_auth_query | PG_ACCESS | bool | C | query postgres to retrieve unlisted biz users?
pgbouncer_poolmode | PG_ACCESS | enum | C | pooling mode: transaction,session,statement, transaction by default
pgbouncer_sslmode | PG_ACCESS | enum | C | pgbouncer client ssl mode, disable by default
pgbouncer_ignore_param | PG_ACCESS | string[] | C | pgbouncer ignore_startup_parameters list
pg_provision | PG_PROVISION | bool | C | provision postgres cluster after bootstrap
pg_init | PG_PROVISION | string | G/C | provision init script for cluster template, pg-init by default
pg_default_roles | PG_PROVISION | role[] | G/C | default roles and users in postgres cluster
pg_default_privileges | PG_PROVISION | string[] | G/C | default privileges when created by admin user
pg_default_schemas | PG_PROVISION | string[] | G/C | default schemas to be created
pg_default_extensions | PG_PROVISION | extension[] | G/C | default extensions to be created
pg_reload | PG_PROVISION | bool | A | reload postgres after hba changes
pg_default_hba_rules | PG_PROVISION | hba[] | G/C | postgres default host-based auth rules
pgb_default_hba_rules | PG_PROVISION | hba[] | G/C | pgbouncer default host-based auth rules
pgbackrest_enabled | PG_BACKUP | bool | C | enable pgbackrest on pgsql host?
pgbackrest_clean | PG_BACKUP | bool | C | remove pg backup data during init?
pgbackrest_log_dir | PG_BACKUP | path | C | pgbackrest log dir, /pg/log/pgbackrest by default
pgbackrest_method | PG_BACKUP | enum | C | pgbackrest repo method: local,minio,etc…
pgbackrest_init_backup | PG_BACKUP | bool | C | take a full backup after pgbackrest init?
pgbackrest_repo | PG_BACKUP | dict | G/C | pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
pg_weight | PG_ACCESS | int | I | relative load balance weight in service, 100 by default, 0-255
pg_service_provider | PG_ACCESS | enum | G/C | dedicated haproxy node group name, or empty string for local nodes by default
pg_default_service_dest | PG_ACCESS | enum | G/C | default service dest if svc.dest='default'
pg_default_services | PG_ACCESS | service[] | G/C | postgres default service definitions
pg_vip_enabled | PG_ACCESS | bool | C | enable L2 VIP for pgsql primary? false by default
pg_vip_address | PG_ACCESS | cidr4 | C | vip addr in <ipv4>/<mask> format, required if vip is enabled
pg_vip_interface | PG_ACCESS | string | C/I | vip network interface to listen, eth0 by default
pg_dns_suffix | PG_ACCESS | string | C | pgsql dns suffix, '' by default
pg_dns_target | PG_ACCESS | enum | C | auto, primary, vip, none, or ad hoc ip
pg_exporter_enabled | PG_MONITOR | bool | C | enable pg_exporter on pgsql hosts?
pg_exporter_config | PG_MONITOR | string | C | pg_exporter config file name
pg_exporter_cache_ttls | PG_MONITOR | string | C | pg_exporter collector ttl stage in seconds, '1,10,60,300' by default
pg_exporter_port | PG_MONITOR | port | C | pg_exporter listen port, 9630 by default
pg_exporter_params | PG_MONITOR | string | C | extra url params for pg_exporter dsn
pg_exporter_url | PG_MONITOR | pgurl | C | overwrite auto-gen pg dsn if specified
pg_exporter_auto_discovery | PG_MONITOR | bool | C | enable auto database discovery? enabled by default
pg_exporter_exclude_database | PG_MONITOR | string | C | csv of database that WILL NOT be monitored during auto-discovery
pg_exporter_include_database | PG_MONITOR | string | C | csv of database that WILL BE monitored during auto-discovery
pg_exporter_connect_timeout | PG_MONITOR | int | C | pg_exporter connect timeout in ms, 200 by default
pg_exporter_options | PG_MONITOR | arg | C | overwrite extra options for pg_exporter
pgbouncer_exporter_enabled | PG_MONITOR | bool | C | enable pgbouncer_exporter on pgsql hosts?
pgbouncer_exporter_port | PG_MONITOR | port | C | pgbouncer_exporter listen port, 9631 by default
pgbouncer_exporter_url | PG_MONITOR | pgurl | C | overwrite auto-gen pgbouncer dsn if specified
pgbouncer_exporter_options | PG_MONITOR | arg | C | overwrite extra options for pgbouncer_exporter
pgbackrest_exporter_enabled | PG_MONITOR | bool | C | enable pgbackrest_exporter on pgsql hosts?
pgbackrest_exporter_port | PG_MONITOR | port | C | pgbackrest_exporter listen port, 9854 by default
pgbackrest_exporter_options | PG_MONITOR | arg | C | overwrite extra options for pgbackrest_exporter
pg_safeguard | PG_REMOVE | bool | G/C/A | prevent purging running postgres instance? false by default
pg_rm_data | PG_REMOVE | bool | G/C/A | remove postgres data during remove? true by default
pg_rm_backup | PG_REMOVE | bool | G/C/A | remove pgbackrest backup during primary remove? true by default
pg_rm_pkg | PG_REMOVE | bool | G/C/A | uninstall postgres pkgs during remove? true by default
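The Level column indicates at which inventory level a parameter is usually set: G (global, in all.vars), C (cluster, in group vars), I (instance, in host vars), and A (playbook argument, passed at runtime with -e). A minimal sketch of where each level lives (illustrative names and IP):

```yaml
all:
  vars:                        # G: global level, applies to all clusters
    pg_version: 18
  children:
    pg-test:
      vars:                    # C: cluster level, applies to all cluster members
        pg_cluster: pg-test
        pg_conf: oltp.yml
      hosts:
        10.10.10.11:           # I: instance level, applies to this host only
          pg_seq: 1
          pg_role: primary
# A: argument level, e.g. ./pgsql.yml -e pg_clean=true
```

Lower levels override higher ones, so a host-level setting wins over its cluster and global counterparts.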

Tutorials

Tutorials for using/managing PostgreSQL in Pigsty.

  • Clone an existing PostgreSQL cluster
  • Create an online standby cluster of existing PostgreSQL cluster
  • Create a delayed standby cluster of existing PostgreSQL cluster
  • Monitor an existing postgres instance
  • Migrate from external PostgreSQL to Pigsty-managed PostgreSQL using logical replication
  • Use MinIO as centralized pgBackRest backup repo
  • Use dedicated etcd cluster as PostgreSQL / Patroni DCS
  • Use dedicated haproxy load balancer cluster to expose PostgreSQL services
  • Use pg-meta CMDB instead of pigsty.yml as inventory source
  • Use PostgreSQL as Grafana backend storage
  • Use PostgreSQL as Prometheus backend storage

10.1 - Core Concepts

Core concepts and architecture design

10.2 - Configuration

Choose the appropriate instance and cluster types based on your requirements to configure PostgreSQL database clusters that meet your needs.

Pigsty is a “configuration-driven” PostgreSQL platform: all behaviors come from the combination of inventory files in ~/pigsty/conf/*.yml and PGSQL parameters. Once you’ve written the configuration, you can replicate a customized cluster with instances, users, databases, access control, extensions, and tuning policies in just a few minutes.


Configuration Entry

  1. Prepare Inventory: Copy a pigsty/conf/*.yml template or write an Ansible Inventory from scratch, placing cluster groups (all.children.<cls>.hosts) and global variables (all.vars) in the same file.
  2. Define Parameters: Override the required PGSQL parameters in the vars block. The override order from global → cluster → host determines the final value.
  3. Apply Configuration: Run ./configure -c <conf>, then playbooks such as bin/pgsql-add <cls>, to apply the configuration. Pigsty will generate the configuration files needed for Patroni/pgbouncer/pgbackrest based on the parameters.

Pigsty’s default demo inventory conf/pgsql.yml is a minimal example: one pg-meta cluster, global pg_version: 18, and a few business user and database definitions. You can expand with more clusters from this base.
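Trimmed to its essentials, such a minimal inventory looks like the sketch below (illustrative IP and credentials, not the full conf/pgsql.yml):

```yaml
all:
  children:
    pg-meta:                           # one cluster group, name matches pg_cluster
      hosts:
        10.10.10.10: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-meta
        pg_users:                      # a business user, added to pgbouncer by default
          - { name: dbuser_meta, password: DBUser.Meta, roles: [ dbrole_admin ] }
        pg_databases:                  # a business database owned by that user
          - { name: meta, owner: dbuser_meta }
  vars:
    pg_version: 18                     # global default postgres major version
```

Everything else (HA, backups, monitoring, services) comes from parameter defaults, which is why the demo inventory stays this small.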


Focus Areas & Documentation Index

Pigsty’s PostgreSQL configuration can be organized from the following dimensions. Subsequent documentation will explain “how to configure” each:

  • Cluster & Instances: Define instance topology (standalone, primary-replica, standby cluster, delayed cluster, Citus, etc.) through pg_cluster / pg_role / pg_seq / pg_upstream.
  • Kernel Version: Select the core version, flavor, and tuning templates using pg_version, pg_mode, pg_packages, pg_extensions, pg_conf, and other parameters.
  • Users/Roles: Declare system roles, business accounts, password policies, and connection pool attributes in pg_default_roles and pg_users.
  • Database Objects: Create databases as needed using pg_databases, baseline, schemas, extensions, pool_* fields and automatically integrate with pgbouncer/Grafana.
  • Access Control (HBA): Maintain host-based authentication policies using pg_default_hba_rules and pg_hba_rules to ensure access boundaries for different roles/networks.
  • Privilege Model (ACL): Converge object privileges through pg_default_privileges, pg_default_roles, pg_revoke_public parameters, providing an out-of-the-box layered role system.

After understanding these parameters, you can write declarative inventory manifests as “configuration as infrastructure” for any business requirement. Pigsty will handle execution and ensure idempotency.


A Typical Example

The following snippet shows how to control instance topology, kernel version, extensions, users, and databases in the same configuration file:

all:
  children:
    pg-analytics:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
        10.10.10.12: { pg_seq: 2, pg_role: replica, pg_offline_query: true }
      vars:
        pg_cluster: pg-analytics
        pg_conf: olap.yml
        pg_extensions: [ postgis, timescaledb, pgvector ]
        pg_databases:
          - { name: bi, owner: dbuser_bi, schemas: [mart], extensions: [timescaledb], pool_mode: session }
        pg_users:
          - { name: dbuser_bi, password: DBUser.BI, roles: [dbrole_admin], pgbouncer: true }
  vars:
    pg_version: 17
    pg_packages: [ pgsql-main, pgsql-common ]
    pg_hba_rules:
      - { user: dbuser_bi, db: bi, addr: intra, auth: ssl, title: 'BI only allows intranet SSL access' }
  • The pg-analytics cluster contains one primary and one offline replica.
  • Global settings specify pg_version: 17 with a set of extension examples and load olap.yml tuning.
  • Declare business objects in pg_databases and pg_users, automatically generating schema/extension and connection pool entries.
  • Additional pg_hba_rules restrict access sources and authentication methods.

Modify and apply this inventory to get a customized PostgreSQL cluster without manual configuration.

10.2.1 - Cluster / Instance

Choose the appropriate instance and cluster types based on your requirements to configure PostgreSQL database clusters that meet your needs.


You can define different types of instances and clusters. Here are several common PostgreSQL instance/cluster types in Pigsty:

  • Primary: Define a single instance cluster.
  • Replica: Define a basic HA cluster with one primary and one replica.
  • Offline: Define an instance dedicated to OLAP/ETL/interactive queries.
  • Sync Standby: Enable synchronous commit to ensure no data loss.
  • Quorum Commit: Use quorum sync commit for a higher consistency level.
  • Standby Cluster: Clone an existing cluster and follow it.
  • Delayed Cluster: Clone an existing cluster for emergency data recovery.
  • Citus Cluster: Define a Citus distributed database cluster.

Primary

We start with the simplest case: a single instance cluster consisting of one primary:

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars:
    pg_cluster: pg-test

This configuration is concise and self-describing, consisting only of identity parameters. Note that the Ansible Group name should match pg_cluster.

Use the following command to create this cluster:

bin/pgsql-add pg-test

For demos, development and testing, temporary workloads, or non-critical analytical tasks, a single database instance is usually acceptable. However, such a single-node cluster has no high availability: when hardware fails, you must fall back on PITR or other recovery methods, and RTO/RPO will suffer accordingly. For this reason, consider adding one or more read-only replicas to the cluster.


Replica

To add a read-only replica instance, you can add a new node to pg-test and set its pg_role to replica.

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }  # <--- newly added replica
  vars:
    pg_cluster: pg-test

If the entire cluster doesn’t exist, you can directly create the complete cluster. If the cluster primary has already been initialized, you can add a replica to the existing cluster:

bin/pgsql-add pg-test               # initialize the entire cluster at once
bin/pgsql-add pg-test 10.10.10.12   # add replica to existing cluster

When the cluster primary fails, the read-only instance (Replica) can take over the primary’s work with the help of the high availability system. Additionally, read-only instances can be used to execute read-only queries: many businesses have far more read requests than write requests, and most read-only query loads can be handled by replica instances.


Offline

Offline instances are dedicated read-only replicas specifically for serving slow queries, ETL, OLAP traffic, and interactive queries. Slow queries/long transactions have adverse effects on the performance and stability of online business, so it’s best to isolate them from online business.

To add an offline instance, assign it a new instance and set pg_role to offline.

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: offline }  # <--- newly added offline replica
  vars:
    pg_cluster: pg-test

Dedicated offline instances work similarly to common replica instances, but they serve as backup servers in the pg-test-replica service. That is, only when all replica instances are down will the offline and primary instances provide this read-only service.

In many cases, database resources are limited, and using a separate server as an offline instance is not economical. As a compromise, you can select an existing replica instance and mark it with the pg_offline_query flag to indicate it can handle “offline queries”. In this case, this read-only replica will handle both online read-only requests and offline queries. You can use pg_default_hba_rules and pg_hba_rules for additional access control on offline instances.
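The compromise described above can be declared in the inventory by marking an existing replica — a minimal sketch reusing the pg-test layout from this section:

```yaml
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica, pg_offline_query: true } # <--- serves online reads and offline queries
  vars:
    pg_cluster: pg-test
```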


Sync Standby

When Sync Standby is enabled, PostgreSQL will select one replica as the sync standby, with all other replicas as candidates. The primary database will wait for the standby instance to flush to disk before confirming commits. The standby instance always has the latest data with no replication lag, and primary-standby switchover to the sync standby will have no data loss.

PostgreSQL uses asynchronous streaming replication by default, which may have small replication lag (on the order of 10KB/10ms). When the primary fails, there may be a small data loss window (which can be controlled using pg_rpo), but this is acceptable for most scenarios.

However, in some critical scenarios (e.g., financial transactions), data loss is completely unacceptable, or read replication lag is unacceptable. In such cases, you can use synchronous commit to solve this problem. To enable sync standby mode, you can simply use the crit.yml template in pg_conf.

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
    10.10.10.13: { pg_seq: 3, pg_role: replica }
  vars:
    pg_cluster: pg-test
    pg_conf: crit.yml   # <--- use crit template

To enable sync standby on an existing cluster, configure the cluster and enable synchronous_mode:

$ pg edit-config pg-test    # run as admin user on admin node
+++
-synchronous_mode: false    # <--- old value
+synchronous_mode: true     # <--- new value
 synchronous_mode_strict: false

Apply these changes? [y/N]: y

In this case, the PostgreSQL configuration parameter synchronous_standby_names is automatically managed by Patroni. One replica will be elected as the sync standby, and its application_name will be written to the PostgreSQL primary configuration file and applied.


Quorum Commit

Quorum Commit provides more powerful control than sync standby: especially when you have multiple replicas, you can set criteria for successful commits, achieving higher/lower consistency levels (and trade-offs with availability).

If you want at least two replicas to confirm commits, you can adjust the synchronous_node_count parameter through Patroni cluster configuration and apply it:

synchronous_mode: true          # ensure synchronous commit is enabled
synchronous_node_count: 2       # specify "at least" how many replicas must successfully commit

If you want to use more sync replicas, modify the synchronous_node_count value. When the cluster size changes, you should ensure this configuration is still valid to avoid service unavailability.

In this case, the PostgreSQL configuration parameter synchronous_standby_names is automatically managed by Patroni.

synchronous_standby_names = '2 ("pg-test-3","pg-test-2")'
Example: Using multiple sync standbys
$ pg edit-config pg-test
---
+synchronous_node_count: 2

Apply these changes? [y/N]: y

After applying the configuration, two sync standbys appear.

+ Cluster: pg-test (7080814403632534854) +---------+----+-----------+-----------------+
| Member    | Host        | Role         | State   | TL | Lag in MB | Tags            |
+-----------+-------------+--------------+---------+----+-----------+-----------------+
| pg-test-1 | 10.10.10.10 | Leader       | running |  1 |           | clonefrom: true |
| pg-test-2 | 10.10.10.11 | Sync Standby | running |  1 |         0 | clonefrom: true |
| pg-test-3 | 10.10.10.12 | Sync Standby | running |  1 |         0 | clonefrom: true |
+-----------+-------------+--------------+---------+----+-----------+-----------------+

Another scenario is using any n replicas to confirm commits. In this case, the configuration is slightly different. For example, if we only need any one replica to confirm commits:

synchronous_mode: quorum        # use quorum commit
postgresql:
  parameters:                   # modify PostgreSQL's configuration parameter synchronous_standby_names, using `ANY n ()` syntax
    synchronous_standby_names: 'ANY 1 (*)'  # you can specify a specific replica list or use * to wildcard all replicas.
Example: Enable ANY quorum commit
$ pg edit-config pg-test

+    synchronous_standby_names: 'ANY 1 (*)' # in ANY mode, this parameter is needed
- synchronous_node_count: 2  # in ANY mode, this parameter is not needed

Apply these changes? [y/N]: y

After applying, the configuration takes effect, and all standbys become regular replicas in Patroni. However, in pg_stat_replication, you can see sync_state becomes quorum.


Standby Cluster

You can clone an existing cluster and create a standby cluster for data migration, horizontal splitting, multi-region deployment, or disaster recovery.

Under normal circumstances, the standby cluster will follow the upstream cluster and keep content synchronized. You can promote the standby cluster to become a truly independent cluster.

The standby cluster definition is basically the same as a normal cluster definition, except that the pg_upstream parameter is additionally defined on the primary. The primary of the standby cluster is called the Standby Leader.

For example, below defines a pg-test cluster and its standby cluster pg-test2. The configuration inventory might look like this:

# pg-test is the original cluster
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars: { pg_cluster: pg-test }

# pg-test2 is the standby cluster of pg-test
pg-test2:
  hosts:
    10.10.10.12: { pg_seq: 1, pg_role: primary , pg_upstream: 10.10.10.11 } # <--- pg_upstream defined here
    10.10.10.13: { pg_seq: 2, pg_role: replica }
  vars: { pg_cluster: pg-test2 }

The primary node pg-test2-1 of the pg-test2 cluster will be a downstream replica of pg-test and serve as the Standby Leader in the pg-test2 cluster.

Just ensure the pg_upstream parameter is configured on the standby cluster’s primary node to automatically pull backups from the original upstream.

bin/pgsql-add pg-test     # create original cluster
bin/pgsql-add pg-test2    # create standby cluster
Example: Change replication upstream

If necessary (e.g., upstream primary-standby switchover/failover), you can change the standby cluster’s replication upstream through cluster configuration.

To do this, simply change standby_cluster.host to the new upstream IP address and apply.

$ pg edit-config pg-test2

 standby_cluster:
   create_replica_methods:
   - basebackup
-  host: 10.10.10.13     # <--- old upstream
+  host: 10.10.10.12     # <--- new upstream
   port: 5432

 Apply these changes? [y/N]: y
Example: Promote standby cluster

You can promote the standby cluster to an independent cluster at any time, so the cluster can independently handle write requests and diverge from the original cluster.

To do this, you must configure the cluster and completely erase the standby_cluster section, then apply.

$ pg edit-config pg-test2
-standby_cluster:
-  create_replica_methods:
-  - basebackup
-  host: 10.10.10.11
-  port: 5432

Apply these changes? [y/N]: y
Example: Cascade replication

If you specify pg_upstream on a replica instead of the primary, you can configure cascade replication for the cluster.

When configuring cascade replication, you must use the IP address of an instance in the cluster as the parameter value, otherwise initialization will fail. The replica performs streaming replication from a specific instance rather than the primary.

The instance acting as a WAL relay is called a Bridge Instance. Using a bridge instance can share the burden of sending WAL from the primary. When you have dozens of replicas, using bridge instance cascade replication is a good idea.

pg-test:
  hosts: # pg-test-1 ---> pg-test-2 ---> pg-test-3
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica } # <--- bridge instance
    10.10.10.13: { pg_seq: 3, pg_role: replica, pg_upstream: 10.10.10.12 }
    # ^--- replicate from pg-test-2 (bridge) instead of pg-test-1 (primary)
  vars: { pg_cluster: pg-test }

Delayed Cluster

A Delayed Cluster is a special type of standby cluster used to quickly recover “accidentally deleted” data.

For example, if you want a cluster named pg-testdelay whose data content is the same as the pg-test cluster from one day ago:

# pg-test is the original cluster
pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
  vars: { pg_cluster: pg-test }

# pg-testdelay is the delayed cluster of pg-test
pg-testdelay:
  hosts:
    10.10.10.12: { pg_seq: 1, pg_role: primary , pg_upstream: 10.10.10.11, pg_delay: 1d }
    10.10.10.13: { pg_seq: 2, pg_role: replica }
  vars: { pg_cluster: pg-testdelay }

You can also configure a “replication delay” on an existing standby cluster.

$ pg edit-config pg-testdelay
 standby_cluster:
   create_replica_methods:
   - basebackup
   host: 10.10.10.11
   port: 5432
+  recovery_min_apply_delay: 1h    # <--- add delay duration here, e.g. 1 hour

Apply these changes? [y/N]: y

When some tuples and tables are accidentally deleted, you can modify this parameter to advance this delayed cluster to an appropriate point in time, read data from it, and quickly fix the original cluster.

Delayed clusters require additional resources, but are much faster than PITR and have much less impact on the system. For very critical clusters, consider setting up delayed clusters.


Citus Cluster

Pigsty natively supports Citus. You can refer to files/pigsty/citus.yml and prod.yml as examples.

To define a Citus cluster, you need to specify the following parameters:

  • pg_mode must be set to citus, not the default pgsql
  • The shard name pg_shard and shard number pg_group must be defined on each shard cluster
  • pg_primary_db must be defined to specify the database managed by Patroni.
  • If you want to use pg_dbsu postgres instead of the default pg_admin_username to execute admin commands, then pg_dbsu_password must be set to a non-empty plaintext password

Additionally, extra hba rules are needed to allow SSL access from localhost and other data nodes. As shown below:

all:
  children:
    pg-citus0: # citus shard 0
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus0 , pg_group: 0 }
    pg-citus1: # citus shard 1
      hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus1 , pg_group: 1 }
    pg-citus2: # citus shard 2
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus2 , pg_group: 2 }
    pg-citus3: # citus shard 3
      hosts:
        10.10.10.13: { pg_seq: 1, pg_role: primary }
        10.10.10.14: { pg_seq: 2, pg_role: replica }
      vars: { pg_cluster: pg-citus3 , pg_group: 3 }
  vars:                               # global parameters for all Citus clusters
    pg_mode: citus                    # pgsql cluster mode must be set to: citus
    pg_shard: pg-citus                # citus horizontal shard name: pg-citus
    pg_primary_db: meta               # citus database name: meta
    pg_dbsu_password: DBUser.Postgres # if using dbsu, need to configure a password for it
    pg_users: [ { name: dbuser_meta ,password: DBUser.Meta ,pgbouncer: true ,roles: [ dbrole_admin ] } ]
    pg_databases: [ { name: meta ,extensions: [ { name: citus }, { name: postgis }, { name: timescaledb } ] } ]
    pg_hba_rules:
      - { user: 'all' ,db: all  ,addr: 127.0.0.1/32 ,auth: ssl ,title: 'all user ssl access from localhost' }
      - { user: 'all' ,db: all  ,addr: intra        ,auth: ssl ,title: 'all user ssl access from intranet'  }

On the coordinator node, you can create distributed tables and reference tables and query them from any data node. Starting from Citus 11.2, any Citus database node can act as the coordinator.

SELECT create_distributed_table('pgbench_accounts', 'aid'); SELECT truncate_local_data_after_distributing_table($$public.pgbench_accounts$$);
SELECT create_reference_table('pgbench_branches')         ; SELECT truncate_local_data_after_distributing_table($$public.pgbench_branches$$);
SELECT create_reference_table('pgbench_history')          ; SELECT truncate_local_data_after_distributing_table($$public.pgbench_history$$);
SELECT create_reference_table('pgbench_tellers')          ; SELECT truncate_local_data_after_distributing_table($$public.pgbench_tellers$$);

10.2.2 - Kernel Version

How to choose the appropriate PostgreSQL kernel and major version.

Choosing a “kernel” in Pigsty means determining the PostgreSQL major version, mode/distribution, packages to install, and tuning templates to load.

Pigsty supports PostgreSQL from version 10 onwards. The current version packages core software for versions 13-18 by default and provides a complete extension set for 17/18. The following content shows how to make these choices through configuration files.


Major Version and Packages

  • pg_version: Specify the PostgreSQL major version (default 18). Pigsty will automatically map to the correct package name prefix based on the version.
  • pg_packages: Define the core package set to install, supports using package aliases (default pgsql-main pgsql-common, includes kernel + patroni/pgbouncer/pgbackrest and other common tools).
  • pg_extensions: List of additional extension packages to install, also supports aliases; defaults to empty meaning only core dependencies are installed.
all:
  vars:
    pg_version: 18
    pg_packages: [ pgsql-main pgsql-common ]
    pg_extensions: [ postgis, timescaledb, pgvector, pgml ]

Effect: Ansible will pull packages corresponding to pg_version=18 during installation, pre-install extensions to the system, and database initialization scripts can then directly CREATE EXTENSION.

Extension support varies across versions in Pigsty’s offline repository: 12/13 only provide core and tier-1 extensions, while 15/17/18 cover all extensions. If an extension is not pre-packaged, it can be added via repo_packages_extra.


Kernel Mode (pg_mode)

pg_mode controls the kernel “flavor” to deploy. Default pgsql indicates standard PostgreSQL. Pigsty currently supports the following modes:

| Mode   | Scenario                                                            |
|--------|---------------------------------------------------------------------|
| pgsql  | Standard PostgreSQL, HA + replication                               |
| citus  | Citus distributed cluster, requires additional pg_shard / pg_group  |
| gpsql  | Greenplum / MatrixDB                                                |
| mssql  | Babelfish for PostgreSQL                                            |
| mysql  | OpenGauss/HaloDB compatible with MySQL protocol                     |
| polar  | Alibaba PolarDB (based on pg polar distribution)                    |
| ivory  | IvorySQL (Oracle-compatible syntax)                                 |
| oriole | OrioleDB storage engine                                             |
| oracle | PostgreSQL + ora compatibility (pg_mode: oracle)                    |

After selecting a mode, Pigsty will automatically load corresponding templates, dependency packages, and Patroni configurations. For example, deploying Citus:

all:
  children:
    pg-citus0:
      hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus0, pg_group: 0 }
    pg-citus1:
      hosts: { 10.10.10.12: { pg_seq: 1, pg_role: primary } }
      vars: { pg_cluster: pg-citus1, pg_group: 1 }
  vars:
    pg_mode: citus
    pg_shard: pg-citus
    pg_primary_db: meta

Effect: All members will install Citus-related packages, Patroni writes to etcd in shard mode, and automatically CREATE EXTENSION citus in the meta database.


Extensions and Pre-installed Objects

Besides system packages, you can control components automatically loaded after database startup through the following parameters:

  • pg_libs: List to write to shared_preload_libraries. For example: pg_libs: 'timescaledb, pg_stat_statements, auto_explain'.
  • pg_default_extensions / pg_default_schemas: Control schemas and extensions pre-created in template1 and postgres by initialization scripts.
  • pg_parameters: Append ALTER SYSTEM SET for all instances (written to postgresql.auto.conf).

Example: Enable TimescaleDB, pgvector and customize some system parameters.

pg-analytics:
  vars:
    pg_cluster: pg-analytics
    pg_libs: 'timescaledb, pg_stat_statements, pgml'
    pg_default_extensions:
      - { name: timescaledb }
      - { name: pgvector }
    pg_parameters:
      timescaledb.max_background_workers: 8
      shared_preload_libraries: "'timescaledb,pg_stat_statements,pgml'"

Effect: During initialization, template1 creates extensions, Patroni’s postgresql.conf injects corresponding parameters, and all business databases inherit these settings.


Tuning Template (pg_conf)

pg_conf points to Patroni templates in roles/pgsql/templates/*.yml. Pigsty includes four built-in general templates:

| Template | Applicable Scenario                                                                   |
|----------|---------------------------------------------------------------------------------------|
| oltp.yml | Default template, for 4–128 core TP workload                                           |
| olap.yml | Optimized for analytical scenarios                                                     |
| crit.yml | Emphasizes sync commit/minimal latency, suitable for zero-loss scenarios like finance  |
| tiny.yml | Lightweight machines / edge scenarios / resource-constrained environments              |

You can directly replace the template or customize a YAML file in templates/, then specify it in cluster vars.

pg-ledger:
  hosts: { 10.10.10.21: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-ledger
    pg_conf: crit.yml
    pg_parameters:
      synchronous_commit: 'remote_apply'
      max_wal_senders: 16
      wal_keep_size: '2GB'

Effect: Copy crit.yml as Patroni configuration, overlay pg_parameters written to postgresql.auto.conf, making instances run immediately in synchronous commit mode.


Combined Instance: A Complete Example

pg-rag:
  hosts:
    10.10.10.31: { pg_seq: 1, pg_role: primary }
    10.10.10.32: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-rag
    pg_version: 18
    pg_mode: pgsql
    pg_conf: olap.yml
    pg_packages: [ pgsql-main pgsql-common ]
    pg_extensions: [ pgvector, pgml, postgis ]
    pg_libs: 'pg_stat_statements, pgvector, pgml'
    pg_parameters:
      max_parallel_workers: 8
      shared_buffers: '32GB'
  • First primary + one replica, using olap.yml tuning.
  • Install PG18 + RAG common extensions, automatically load pgvector/pgml at system level.
  • Patroni/pgbouncer/pgbackrest generated by Pigsty, no manual intervention needed.

Replace the above parameters according to business needs to complete all kernel-level customization.

10.2.3 - Package Alias

Pigsty provides a package alias translation mechanism that shields the differences in binary package details across operating systems, making installation easier.

PostgreSQL package naming conventions vary significantly across different operating systems:

  • EL systems (RHEL/Rocky/Alma/…) use formats like pgvector_17, postgis36_17*
  • Debian/Ubuntu systems use formats like postgresql-17-pgvector, postgresql-17-postgis-3

This difference adds cognitive burden to users: you need to remember different package name rules for different systems, and handle the embedding of PostgreSQL version numbers.

Package Alias

Pigsty solves this problem through the Package Alias mechanism: you only need to use unified aliases, and Pigsty will handle all the details:

# Using aliases - simple, unified, cross-platform
pg_extensions: [ postgis, pgvector, timescaledb ]

# Equivalent to actual package names on EL9 + PG17
pg_extensions: [ postgis36_17*, pgvector_17*, timescaledb-tsl_17* ]

# Equivalent to actual package names on Ubuntu 24 + PG17
pg_extensions: [ postgresql-17-postgis-3, postgresql-17-pgvector, postgresql-17-timescaledb-tsl ]

Alias Translation

Aliases can also group a set of packages as a whole. For example, Pigsty’s default installed packages - the default value of pg_packages is:

pg_packages:                      # pg packages to be installed, alias can be used
  - pgsql-main pgsql-common

Pigsty will query the current operating system alias list (assuming el10.x86_64) and translate it to PGSQL kernel, extensions, and toolkits:

pgsql-main:    "postgresql$v postgresql$v-server postgresql$v-libs postgresql$v-contrib postgresql$v-plperl postgresql$v-plpython3 postgresql$v-pltcl postgresql$v-llvmjit pg_repack_$v* wal2json_$v* pgvector_$v*"
pgsql-common:  "patroni patroni-etcd pgbouncer pgbackrest pg_exporter pgbackrest_exporter vip-manager"

Next, Pigsty further translates pgsql-main using the currently specified PG major version (assuming pg_version = 18):

pg18-main:   "postgresql18 postgresql18-server postgresql18-libs postgresql18-contrib postgresql18-plperl postgresql18-plpython3 postgresql18-pltcl postgresql18-llvmjit pg_repack_18* wal2json_18* pgvector_18*"

Through this approach, Pigsty shields the complexity of packages, allowing users to simply specify the functional components they want.
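The $v substitution step can be sketched with plain shell parameter expansion — an illustration of the idea only, not Pigsty's actual implementation:

```shell
# Illustrative only: mimic how an alias template expands for a given PG major version
pg_version=17
alias_el='pgvector_$v*'                # EL-style alias template
alias_deb='postgresql-$v-pgvector'     # Debian-style alias template
echo "${alias_el//\$v/$pg_version}"    # pgvector_17*
echo "${alias_deb//\$v/$pg_version}"   # postgresql-17-pgvector
```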


Which Variables Can Use Aliases?

You can use package aliases in the following four parameters, and the aliases will be automatically converted to actual package names according to the translation process: pg_packages, pg_extensions, repo_packages, and repo_packages_extra.


Alias List

You can find the alias mapping files for each operating system and architecture in the roles/node_id/vars/ directory of the Pigsty project source code.


How It Works

Alias Translation Process

User config alias --> Detect OS -->  Find alias mapping table ---> Replace $v placeholder ---> Install actual packages
     ↓                 ↓                   ↓                                   ↓
  postgis          el9.x86_64         postgis36_$v*                   postgis36_17*
  postgis          u24.x86_64         postgresql-$v-postgis-3         postgresql-17-postgis-3

Version Placeholder

Pigsty’s alias system uses $v as a placeholder for the PostgreSQL version number. When you specify a PostgreSQL version using pg_version, all $v in aliases will be replaced with the actual version number.

For example, when pg_version: 17:

| Alias Definition (EL) | Expanded Result     |
|-----------------------|---------------------|
| postgresql$v*         | postgresql17*       |
| pgvector_$v*          | pgvector_17*        |
| timescaledb-tsl_$v*   | timescaledb-tsl_17* |

| Alias Definition (Debian/Ubuntu) | Expanded Result               |
|----------------------------------|-------------------------------|
| postgresql-$v                    | postgresql-17                 |
| postgresql-$v-pgvector           | postgresql-17-pgvector        |
| postgresql-$v-timescaledb-tsl    | postgresql-17-timescaledb-tsl |

Wildcard Matching

On EL systems, many aliases use the * wildcard to match related subpackages. For example:

  • postgis36_17* will match postgis36_17, postgis36_17-client, postgis36_17-utils, etc.
  • postgresql17* will match postgresql17, postgresql17-server, postgresql17-libs, postgresql17-contrib, etc.

This design ensures you don’t need to list each subpackage individually - one alias can install the complete extension.

10.2.4 - User/Role

How to define and customize PostgreSQL users and roles through configuration?

In this document, “user” refers to a logical object within a database cluster created with CREATE USER/ROLE.

In PostgreSQL, users belong directly to the database cluster rather than a specific database. Therefore, when creating business databases and users, follow the principle of “users first, databases later”.

Pigsty defines roles and users through two config parameters: pg_default_roles and pg_users.

The former defines roles/users shared across the entire environment; the latter defines business roles/users specific to a single cluster. Both have the same format as arrays of user definition objects. Users/roles are created sequentially in array order, so later users can belong to roles defined earlier.

By default, all users marked with pgbouncer: true are added to the Pgbouncer connection pool user list.
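A sketch of the division of labor between the two levels (names here are illustrative; in practice pg_default_roles should extend Pigsty's built-in role list rather than replace it):

```yaml
all:
  vars:
    pg_default_roles:                 # roles/users shared across the entire environment
      - { name: dbrole_custom, login: false, comment: custom shared role }
  children:
    pg-meta:
      vars:
        pg_users:                     # business users for this cluster only
          - { name: dbuser_meta, password: DBUser.Meta, pgbouncer: true, roles: [ dbrole_custom ] }
```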


Define Users

Example from Pigsty demo pg-meta cluster:

pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-meta
    pg_users:
      - {name: dbuser_meta     ,password: DBUser.Meta     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: pigsty admin user }
      - {name: dbuser_view     ,password: DBUser.Viewer   ,pgbouncer: true ,roles: [dbrole_readonly] ,comment: read-only viewer for meta database }
      - {name: dbuser_grafana  ,password: DBUser.Grafana  ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for grafana database    }
      - {name: dbuser_bytebase ,password: DBUser.Bytebase ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for bytebase database   }
      - {name: dbuser_kong     ,password: DBUser.Kong     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for kong api gateway    }
      - {name: dbuser_gitea    ,password: DBUser.Gitea    ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for gitea service       }
      - {name: dbuser_wiki     ,password: DBUser.Wiki     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for wiki.js service     }
      - {name: dbuser_noco     ,password: DBUser.Noco     ,pgbouncer: true ,roles: [dbrole_admin]    ,comment: admin user for nocodb service      }
      - {name: dbuser_remove   ,state: absent }  # use state: absent to delete user

Each user/role definition is a complex object. Only name is required:

- name: dbuser_meta               # REQUIRED, `name` is the only mandatory field
  state: create                   # Optional, user state: create (default), absent
  password: DBUser.Meta           # Optional, password, can be scram-sha-256 hash or plaintext
  login: true                     # Optional, can login, default true
  superuser: false                # Optional, is superuser, default false
  createdb: false                 # Optional, can create databases, default false
  createrole: false               # Optional, can create roles, default false
  inherit: true                   # Optional, inherit role privileges, default true
  replication: false              # Optional, can replicate, default false
  bypassrls: false                # Optional, bypass row-level security, default false
  connlimit: -1                   # Optional, connection limit, default -1 (unlimited)
  expire_in: 3650                 # Optional, expire N days from creation (priority over expire_at)
  expire_at: '2030-12-31'         # Optional, expiration date in YYYY-MM-DD format
  comment: pigsty admin user      # Optional, user comment
  roles: [dbrole_admin]           # Optional, roles array
  parameters:                     # Optional, role-level config params
    search_path: public
  pgbouncer: true                 # Optional, add to connection pool user list, default false
  pool_mode: transaction          # Optional, pgbouncer pool mode, default transaction
  pool_connlimit: -1              # Optional, user-level max pool connections, default -1

Parameter Overview

The only required field is name - a valid, unique username within the cluster. All other params have sensible defaults.

| Field          | Category  | Type   | Attr     | Description                            |
|----------------|-----------|--------|----------|----------------------------------------|
| name           | Basic     | string | Required | Username, must be valid and unique     |
| state          | Basic     | enum   | Optional | State: create (default), absent        |
| password       | Basic     | string | Mutable  | User password, plaintext or hash       |
| comment        | Basic     | string | Mutable  | User comment                           |
| login          | Privilege | bool   | Mutable  | Can login, default true                |
| superuser      | Privilege | bool   | Mutable  | Is superuser, default false            |
| createdb       | Privilege | bool   | Mutable  | Can create databases, default false    |
| createrole     | Privilege | bool   | Mutable  | Can create roles, default false        |
| inherit        | Privilege | bool   | Mutable  | Inherit role privileges, default true  |
| replication    | Privilege | bool   | Mutable  | Can replicate, default false           |
| bypassrls      | Privilege | bool   | Mutable  | Bypass RLS, default false              |
| connlimit      | Privilege | int    | Mutable  | Connection limit, -1 unlimited         |
| expire_in      | Validity  | int    | Mutable  | Expire N days from now (priority)      |
| expire_at      | Validity  | string | Mutable  | Expiration date, YYYY-MM-DD format     |
| roles          | Role      | array  | Additive | Roles array, string or object format   |
| parameters     | Params    | object | Mutable  | Role-level parameters                  |
| pgbouncer      | Pool      | bool   | Mutable  | Add to connection pool, default false  |
| pool_mode      | Pool      | enum   | Mutable  | Pool mode: transaction (default)       |
| pool_connlimit | Pool      | int    | Mutable  | Pool user max connections              |

Parameter Details

name

String, required. Username - must be unique within the cluster.

Must be a valid PostgreSQL identifier matching ^[a-z_][a-z0-9_]{0,62}$: starts with lowercase letter or underscore, contains only lowercase letters, digits, underscores, max 63 chars.

- name: dbuser_app         # Standard naming
- name: app_readonly       # Underscore separated
- name: _internal          # Underscore prefix (for internal roles)
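The identifier rule can be checked with grep -E — a quick illustration; is_valid is a hypothetical helper, not part of Pigsty:

```shell
# Hypothetical helper: test a name against the ^[a-z_][a-z0-9_]{0,62}$ rule
is_valid() { printf '%s\n' "$1" | grep -Eq '^[a-z_][a-z0-9_]{0,62}$'; }
is_valid dbuser_app && echo "dbuser_app: valid"
is_valid 1bad_name  || echo "1bad_name: invalid (starts with a digit)"
is_valid DBUser     || echo "DBUser: invalid (uppercase not allowed)"
```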

state

Enum for user operation: create or absent. Default create.

| State  | Description                            |
|--------|----------------------------------------|
| create | Default, create user, update if exists |
| absent | Delete user with DROP ROLE             |
- name: dbuser_app             # state defaults to create
- name: dbuser_old
  state: absent                # Delete user

Built-in system users cannot be deleted via state: absent, to prevent cluster failure.

password

String, mutable. User password - users without password can’t login via password auth.

Password can be:

| Format        | Example                          | Description                       |
|---------------|----------------------------------|-----------------------------------|
| Plaintext     | DBUser.Meta                      | Not recommended, logged to config |
| SCRAM-SHA-256 | SCRAM-SHA-256$4096:xxx$yyy:zzz   | Recommended, PG10+ default        |
| MD5 hash      | md5...                           | Legacy compatibility              |
# Plaintext (not recommended, logged to config)
- name: dbuser_app
  password: MySecretPassword

# SCRAM-SHA-256 hash (recommended)
- name: dbuser_app
  password: 'SCRAM-SHA-256$4096:xxx$yyy:zzz'

When setting password, Pigsty temporarily disables logging to prevent leakage:

SET log_statement TO 'none';
ALTER USER "dbuser_app" PASSWORD 'xxx';
SET log_statement TO DEFAULT;

To generate a SCRAM-SHA-256 hash, let PostgreSQL compute the verifier for you:

# Use psql's \password meta-command, which computes the SCRAM-SHA-256 verifier client-side
psql -c '\password dbuser_app'
# Or set a plaintext password first, then read back the stored hash
psql -c "SELECT rolpassword FROM pg_authid WHERE rolname = 'dbuser_app'"

comment

String, mutable. User comment, defaults to business user {name}.

Set via COMMENT ON ROLE, supports special chars (quotes auto-escaped).

- name: dbuser_app
  comment: 'Main business application account'
COMMENT ON ROLE "dbuser_app" IS 'Main business application account';

login

Boolean, mutable. Can login, default true.

Setting false creates a Role rather than User - typically for permission grouping.

In PostgreSQL, CREATE USER equals CREATE ROLE ... LOGIN.

# Create login-able user
- name: dbuser_app
  login: true

# Create role (no login, for permission grouping)
- name: dbrole_custom
  login: false
  comment: custom permission role
CREATE USER "dbuser_app" LOGIN;
CREATE USER "dbrole_custom" NOLOGIN;

superuser

Boolean, mutable. Is superuser, default false.

Superusers have full database privileges, bypassing all permission checks.

- name: dbuser_admin
  superuser: true            # Dangerous: full privileges
ALTER USER "dbuser_admin" SUPERUSER;

Pigsty provides default superuser via pg_admin_username (dbuser_dba). Don’t create additional superusers unless necessary.

createdb

Boolean, mutable. Can create databases, default false.

- name: dbuser_dev
  createdb: true             # Allow create database
ALTER USER "dbuser_dev" CREATEDB;

Some applications (Gitea, Odoo, etc.) may require CREATEDB privilege for their admin users.

createrole

Boolean, mutable. Can create other roles, default false.

Users with CREATEROLE can create, modify, delete other non-superuser roles.

- name: dbuser_admin
  createrole: true           # Allow manage other roles
ALTER USER "dbuser_admin" CREATEROLE;

inherit

Boolean, mutable. Auto-inherit privileges from member roles, default true.

Setting false requires explicit SET ROLE to use member role privileges.

# Auto-inherit role privileges (default)
- name: dbuser_app
  inherit: true
  roles: [dbrole_readwrite]

# Requires explicit SET ROLE
- name: dbuser_special
  inherit: false
  roles: [dbrole_admin]
ALTER USER "dbuser_special" NOINHERIT;
-- User must execute SET ROLE dbrole_admin to get privileges

replication

Boolean, mutable. Can initiate streaming replication, default false.

Usually only replication users (replicator) need this. Normal users shouldn’t have it unless for logical decoding subscriptions.

- name: replicator
  replication: true          # Allow streaming replication
  roles: [pg_monitor, dbrole_readonly]
ALTER USER "replicator" REPLICATION;

bypassrls

Boolean, mutable. Bypass row-level security (RLS) policies, default false.

When enabled, user can access all rows even with RLS policies. Usually only for admins.

- name: dbuser_myappadmin
  bypassrls: true            # Bypass RLS policies
ALTER USER "dbuser_myappadmin" BYPASSRLS;

connlimit

Integer, mutable. Max concurrent connections, default -1 (unlimited).

Positive integer limits max simultaneous sessions for this user. Doesn’t affect superusers.

- name: dbuser_app
  connlimit: 100             # Max 100 concurrent connections

- name: dbuser_batch
  connlimit: 10              # Limit batch user connections
ALTER USER "dbuser_app" CONNECTION LIMIT 100;

expire_in

Integer, mutable. Expire N days from current date.

This param has higher priority than expire_at. Expiration recalculated on each playbook run - good for temp users needing periodic renewal.

- name: temp_user
  expire_in: 30              # Expire in 30 days

- name: contractor_user
  expire_in: 90              # Expire in 90 days

Generates SQL:

-- expire_in: 30, assuming current date is 2025-01-01
ALTER USER "temp_user" VALID UNTIL '2025-01-31';
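The date arithmetic above can be verified directly (a plain illustration, not Pigsty code):

```python
from datetime import date, timedelta

# expire_in: 30, with a current date of 2025-01-01
valid_until = date(2025, 1, 1) + timedelta(days=30)
print(f"ALTER USER \"temp_user\" VALID UNTIL '{valid_until}';")
```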

expire_at

String, mutable. Expiration date in YYYY-MM-DD format, or special value infinity.

Lower priority than expire_in. Use infinity for never-expiring users.

- name: contractor_user
  expire_at: '2024-12-31'    # Expire on specific date

- name: permanent_user
  expire_at: 'infinity'      # Never expires
ALTER USER "contractor_user" VALID UNTIL '2024-12-31';
ALTER USER "permanent_user" VALID UNTIL 'infinity';
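A minimal validity check for the accepted formats (illustrative only):

```python
from datetime import datetime

def valid_expire_at(value):
    """Accept the special value 'infinity' or a YYYY-MM-DD date string."""
    if value == 'infinity':
        return True
    try:
        datetime.strptime(value, '%Y-%m-%d')
        return True
    except ValueError:
        return False

print([valid_expire_at(v) for v in ['2024-12-31', 'infinity', '31/12/2024']])
```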

roles

Array, additive. Roles this user belongs to. Elements can be strings or objects.

Simple format - strings for role names:

- name: dbuser_app
  roles:
    - dbrole_readwrite
    - pg_read_all_data
GRANT "dbrole_readwrite" TO "dbuser_app";
GRANT "pg_read_all_data" TO "dbuser_app";

Full format - objects for fine-grained control:

- name: dbuser_app
  roles:
    - dbrole_readwrite                            # Simple string: GRANT role
    - { name: dbrole_admin, admin: true }         # WITH ADMIN OPTION
    - { name: pg_monitor, set: false }            # PG16+: disallow SET ROLE
    - { name: pg_signal_backend, inherit: false } # PG16+: don't auto-inherit
    - { name: old_role, state: absent }           # Revoke role membership

Object Format Parameters:

| Param | Type | Description |
|-------|------|-------------|
| name | string | Role name (required) |
| state | enum | grant (default) or absent/revoke: control membership |
| admin | bool | true: WITH ADMIN OPTION, false: REVOKE ADMIN |
| set | bool | PG16+: true: WITH SET TRUE, false: REVOKE SET |
| inherit | bool | PG16+: true: WITH INHERIT TRUE, false: REVOKE INHERIT |

PostgreSQL 16+ New Features:

PostgreSQL 16 introduced finer-grained role membership control:

  • ADMIN OPTION: Allow granting role to other users
  • SET OPTION: Allow using SET ROLE to switch to this role
  • INHERIT OPTION: Auto-inherit this role’s privileges
# PostgreSQL 16+ complete example
- name: dbuser_app
  roles:
    # Normal membership
    - dbrole_readwrite

    # Can grant dbrole_admin to other users
    - { name: dbrole_admin, admin: true }

    # Cannot SET ROLE to pg_monitor (only inherit privileges)
    - { name: pg_monitor, set: false }

    # Don't auto-inherit pg_execute_server_program (need explicit SET ROLE)
    - { name: pg_execute_server_program, inherit: false }

    # Revoke old_role membership
    - { name: old_role, state: absent }

set and inherit options only work in PG16+. On earlier versions they’re ignored with warning comments.
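As a rough sketch (not Pigsty's actual implementation), the object format above could be rendered into GRANT/REVOKE statements like this; the helper name and version-gating logic are assumptions:

```python
def render_role_grant(user, role, pg_major=17):
    """Render a GRANT/REVOKE statement for one role entry (string or dict)."""
    if isinstance(role, str):
        role = {'name': role}
    name = role['name']
    if role.get('state') in ('absent', 'revoke'):
        return f'REVOKE "{name}" FROM "{user}";'
    opts = []
    if 'admin' in role:
        opts.append(f"ADMIN {str(role['admin']).upper()}")
    if pg_major >= 16:  # SET/INHERIT membership options only exist on PG16+
        if 'set' in role:
            opts.append(f"SET {str(role['set']).upper()}")
        if 'inherit' in role:
            opts.append(f"INHERIT {str(role['inherit']).upper()}")
    suffix = f" WITH {', '.join(opts)}" if opts else ''
    return f'GRANT "{name}" TO "{user}"{suffix};'

print(render_role_grant('dbuser_app', 'dbrole_readwrite'))
print(render_role_grant('dbuser_app', {'name': 'pg_monitor', 'set': False}))
```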

parameters

Object, mutable. Role-level config params via ALTER ROLE ... SET. Applies to all sessions for this user.

- name: dbuser_analyst
  parameters:
    work_mem: '256MB'
    statement_timeout: '5min'
    search_path: 'analytics,public'
    log_statement: 'all'
ALTER USER "dbuser_analyst" SET "work_mem" = '256MB';
ALTER USER "dbuser_analyst" SET "statement_timeout" = '5min';
ALTER USER "dbuser_analyst" SET "search_path" = 'analytics,public';
ALTER USER "dbuser_analyst" SET "log_statement" = 'all';

Use special value DEFAULT (case-insensitive) to reset to PostgreSQL default:

- name: dbuser_app
  parameters:
    work_mem: DEFAULT          # Reset to default
    statement_timeout: '30s'   # Set new value
ALTER USER "dbuser_app" SET "work_mem" = DEFAULT;
ALTER USER "dbuser_app" SET "statement_timeout" = '30s';

Common role-level params:

| Parameter | Description | Example |
|-----------|-------------|---------|
| work_mem | Query work memory | '64MB' |
| statement_timeout | Statement timeout | '30s' |
| lock_timeout | Lock wait timeout | '10s' |
| idle_in_transaction_session_timeout | Idle transaction timeout | '10min' |
| search_path | Schema search path | 'app,public' |
| log_statement | Log level | 'ddl' |
| temp_file_limit | Temp file size limit | '10GB' |

Query user-level params via pg_db_role_setting system view.
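The DEFAULT handling described above can be sketched as a small renderer (a hypothetical helper, not Pigsty's actual template):

```python
def render_user_params(user, params):
    """Render ALTER USER ... SET statements for a parameters mapping.
    The special value DEFAULT (any case) resets instead of being quoted."""
    stmts = []
    for key, val in params.items():
        if isinstance(val, str) and val.upper() == 'DEFAULT':
            stmts.append(f'ALTER USER "{user}" SET "{key}" = DEFAULT;')
        else:
            stmts.append(f"ALTER USER \"{user}\" SET \"{key}\" = '{val}';")
    return stmts

for stmt in render_user_params('dbuser_app',
                               {'work_mem': 'DEFAULT', 'statement_timeout': '30s'}):
    print(stmt)
```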

pgbouncer

Boolean, mutable. Add user to Pgbouncer user list, default false.

For prod users needing connection pool access, must explicitly set pgbouncer: true. Default false prevents accidentally exposing internal users to the pool.

# Prod user: needs connection pool
- name: dbuser_app
  password: DBUser.App
  pgbouncer: true

# Internal user: no connection pool needed
- name: dbuser_internal
  password: DBUser.Internal
  pgbouncer: false           # Default, can be omitted

Users with pgbouncer: true are added to /etc/pgbouncer/userlist.txt.

pool_mode

Enum, mutable. User-level pool mode: transaction, session, or statement. Default transaction.

| Mode | Description | Use Case |
|------|-------------|----------|
| transaction | Return connection after txn | Most OLTP apps, default |
| session | Return connection after session | Apps needing session state |
| statement | Return after each statement | Simple stateless queries |
# DBA user: session mode (may need SET commands etc.)
- name: dbuser_dba
  pgbouncer: true
  pool_mode: session

# Normal business user: transaction mode
- name: dbuser_app
  pgbouncer: true
  pool_mode: transaction

User-level pool params are configured via /etc/pgbouncer/useropts.txt:

dbuser_dba      = pool_mode=session max_user_connections=16
dbuser_monitor  = pool_mode=session max_user_connections=8

pool_connlimit

Integer, mutable. User-level max pool connections, default -1 (unlimited).

- name: dbuser_app
  pgbouncer: true
  pool_connlimit: 50         # Max 50 pool connections for this user

ACL System

Pigsty provides a built-in, out-of-the-box access control / ACL system. Just assign these four default roles to business users:

| Role | Privileges | Typical Use Case |
|------|------------|------------------|
| dbrole_readwrite | Global read-write | Primary business prod accounts |
| dbrole_readonly | Global read-only | Other business read-only access |
| dbrole_admin | DDL privileges | Business admins, table creation |
| dbrole_offline | Restricted read-only (offline only) | Individual users, ETL/analytics |
# Typical business user configuration
pg_users:
  - name: dbuser_app
    password: DBUser.App
    pgbouncer: true
    roles: [dbrole_readwrite]    # Prod account, read-write

  - name: dbuser_readonly
    password: DBUser.Readonly
    pgbouncer: true
    roles: [dbrole_readonly]     # Read-only account

  - name: dbuser_admin
    password: DBUser.Admin
    pgbouncer: true
    roles: [dbrole_admin]        # Admin, can execute DDL

  - name: dbuser_etl
    password: DBUser.ETL
    roles: [dbrole_offline]      # Offline analytics account

To redesign your own ACL system, customize the default role and privilege parameters.


Pgbouncer Users

Pgbouncer is enabled by default as connection pool middleware. Pigsty adds all users in pg_users with explicit pgbouncer: true flag to the pgbouncer user list.

Users in connection pool are listed in /etc/pgbouncer/userlist.txt:

"postgres" ""
"dbuser_wiki" "SCRAM-SHA-256$4096:+77dyhrPeFDT/TptHs7/7Q==$KeatuohpKIYzHPCt/tqBu85vI11o9mar/by0hHYM2W8=:X9gig4JtjoS8Y/o1vQsIX/gY1Fns8ynTXkbWOjUfbRQ="
"dbuser_view" "SCRAM-SHA-256$4096:DFoZHU/DXsHL8MJ8regdEw==$gx9sUGgpVpdSM4o6A2R9PKAUkAsRPLhLoBDLBUYtKS0=:MujSgKe6rxcIUMv4GnyXJmV0YNbf39uFRZv724+X1FE="
"dbuser_monitor" "SCRAM-SHA-256$4096:fwU97ZMO/KR0ScHO5+UuBg==$CrNsmGrx1DkIGrtrD1Wjexb/aygzqQdirTO1oBZROPY=:L8+dJ+fqlMQh7y4PmVR/gbAOvYWOr+KINjeMZ8LlFww="
"dbuser_meta" "SCRAM-SHA-256$4096:leB2RQPcw1OIiRnPnOMUEg==$eyC+NIMKeoTxshJu314+BmbMFpCcspzI3UFZ1RYfNyU=:fJgXcykVPvOfro2MWNkl5q38oz21nSl1dTtM65uYR1Q="

User-level pool params are maintained in /etc/pgbouncer/useropts.txt:

dbuser_dba      = pool_mode=session max_user_connections=16
dbuser_monitor  = pool_mode=session max_user_connections=8

When creating users, Pgbouncer user list is refreshed via online reload - doesn’t affect existing connections.

Pgbouncer runs as the same dbsu as PostgreSQL (the postgres OS user by default). Use the pgb alias to access pgbouncer admin functions.

The pgbouncer_auth_query param enables dynamic password lookup for pool user auth - convenient when you prefer not to manage pool users manually.


For user management operations, see User Management.

For user access privileges, see ACL: Role Privileges.

10.2.5 - Database

How to define and customize PostgreSQL databases through configuration?

In this document, “database” refers to a logical object within a database cluster created with CREATE DATABASE.

A PostgreSQL cluster can serve multiple databases simultaneously. In Pigsty, you can define required databases in cluster configuration.

Pigsty customizes the template1 template database - creating default schemas, installing default extensions, configuring default privileges. Newly created databases inherit these settings from template1. You can also specify other template databases via template for instant database cloning.

By default, all business databases are 1:1 added to Pgbouncer connection pool; pg_exporter auto-discovers all business databases for in-database object monitoring. All databases are also registered as PostgreSQL datasources in Grafana on all INFRA nodes for PGCAT dashboards.


Define Database

Business databases are defined in cluster param pg_databases, an array of database definition objects. During cluster initialization, databases are created in definition order, so later databases can use earlier ones as templates.

Example from Pigsty demo pg-meta cluster:

pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-meta
    pg_databases:
      - { name: meta ,baseline: cmdb.sql ,comment: pigsty meta database ,schemas: [pigsty] ,extensions: [{name: postgis, schema: public}, {name: timescaledb}]}
      - { name: grafana  ,owner: dbuser_grafana  ,revokeconn: true ,comment: grafana primary database }
      - { name: bytebase ,owner: dbuser_bytebase ,revokeconn: true ,comment: bytebase primary database }
      - { name: kong     ,owner: dbuser_kong     ,revokeconn: true ,comment: kong the api gateway database }
      - { name: gitea    ,owner: dbuser_gitea    ,revokeconn: true ,comment: gitea meta database }
      - { name: wiki     ,owner: dbuser_wiki     ,revokeconn: true ,comment: wiki meta database }
      - { name: noco     ,owner: dbuser_noco     ,revokeconn: true ,comment: nocodb database }

Each database definition is a complex object with fields below. Only name is required:

- name: meta                      # REQUIRED, `name` is the only mandatory field
  state: create                   # Optional, database state: create (default), absent, recreate
  baseline: cmdb.sql              # Optional, SQL baseline file path (relative to Ansible search path, e.g., files/)
  pgbouncer: true                 # Optional, add to pgbouncer database list? default true
  schemas: [pigsty]               # Optional, additional schemas to create, array of schema names
  extensions:                     # Optional, extensions to install: array of extension objects
    - { name: postgis , schema: public }  # Can specify schema, or omit (installs to first schema in search_path)
    - { name: timescaledb }               # Some extensions create and use fixed schemas
  comment: pigsty meta database   # Optional, database comment/description
  owner: postgres                 # Optional, database owner, defaults to current user
  template: template1             # Optional, template to use, default template1
  strategy: FILE_COPY             # Optional, clone strategy: FILE_COPY or WAL_LOG (PG15+)
  encoding: UTF8                  # Optional, inherits from template/cluster config (UTF8)
  locale: C                       # Optional, inherits from template/cluster config (C)
  lc_collate: C                   # Optional, inherits from template/cluster config (C)
  lc_ctype: C                     # Optional, inherits from template/cluster config (C)
  locale_provider: libc           # Optional, locale provider: libc, icu, builtin (PG15+)
  icu_locale: en-US               # Optional, ICU locale rules (PG15+)
  icu_rules: ''                   # Optional, ICU collation rules (PG16+)
  builtin_locale: C.UTF-8         # Optional, builtin locale provider rules (PG17+)
  tablespace: pg_default          # Optional, default tablespace
  is_template: false              # Optional, mark as template database
  allowconn: true                 # Optional, allow connections, default true
  revokeconn: false               # Optional, revoke public CONNECT privilege, default false
  register_datasource: true       # Optional, register to grafana datasource? default true
  connlimit: -1                   # Optional, connection limit, -1 means unlimited
  parameters:                     # Optional, database-level params via ALTER DATABASE SET
    work_mem: '64MB'
    statement_timeout: '30s'
  pool_auth_user: dbuser_meta     # Optional, auth user for pgbouncer auth_query
  pool_mode: transaction          # Optional, database-level pgbouncer pool mode
  pool_size: 64                   # Optional, database-level pgbouncer default pool size
  pool_reserve: 32                # Optional, database-level pgbouncer reserve pool
  pool_size_min: 0                # Optional, database-level pgbouncer min pool size
  pool_connlimit: 100             # Optional, database-level max database connections

Parameter Overview

The only required field is name - a valid, unique database name within the cluster. All other params have sensible defaults. Parameters marked “Immutable” only take effect at creation; changing them requires database recreation.

| Field | Category | Type | Attr | Description |
|-------|----------|------|------|-------------|
| name | Basic | string | Required | Database name, must be valid and unique |
| state | Basic | enum | Optional | State: create (default), absent, recreate |
| owner | Basic | string | Mutable | Database owner, defaults to postgres |
| comment | Basic | string | Mutable | Database comment |
| template | Template | string | Immutable | Template database, default template1 |
| strategy | Template | enum | Immutable | Clone strategy: FILE_COPY or WAL_LOG (PG15+) |
| encoding | Encoding | string | Immutable | Character encoding, default inherited (UTF8) |
| locale | Encoding | string | Immutable | Locale setting, default inherited (C) |
| lc_collate | Encoding | string | Immutable | Collation rule, default inherited (C) |
| lc_ctype | Encoding | string | Immutable | Character classification, default inherited (C) |
| locale_provider | Encoding | enum | Immutable | Locale provider: libc, icu, builtin (PG15+) |
| icu_locale | Encoding | string | Immutable | ICU locale rules (PG15+) |
| icu_rules | Encoding | string | Immutable | ICU collation customization (PG16+) |
| builtin_locale | Encoding | string | Immutable | Builtin locale rules (PG17+) |
| tablespace | Storage | string | Mutable | Default tablespace, change triggers data migration |
| is_template | Privilege | bool | Mutable | Mark as template database |
| allowconn | Privilege | bool | Mutable | Allow connections, default true |
| revokeconn | Privilege | bool | Mutable | Revoke PUBLIC CONNECT privilege |
| connlimit | Privilege | int | Mutable | Connection limit, -1 for unlimited |
| baseline | Init | string | Mutable | SQL baseline file path, runs only on first create |
| schemas | Init | (string\|object)[] | Mutable | Schema definitions to create |
| extensions | Init | (string\|object)[] | Mutable | Extension definitions to install |
| parameters | Init | object | Mutable | Database-level parameters |
| pgbouncer | Pool | bool | Mutable | Add to connection pool, default true |
| pool_mode | Pool | enum | Mutable | Pool mode: transaction (default) |
| pool_size | Pool | int | Mutable | Default pool size, default 64 |
| pool_size_min | Pool | int | Mutable | Min pool size, default 0 |
| pool_reserve | Pool | int | Mutable | Reserve pool size, default 32 |
| pool_connlimit | Pool | int | Mutable | Max database connections, default 100 |
| pool_auth_user | Pool | string | Mutable | Auth query user |
| register_datasource | Monitor | bool | Mutable | Register to Grafana datasource, default true |

Parameter Details

name

String, required. Database name - must be unique within the cluster.

Must be a valid PostgreSQL identifier: max 63 chars, no SQL keywords, starts with a letter or underscore, followed by letters, digits, underscores, or $. Must match: ^[A-Za-z_][A-Za-z0-9_$]{0,62}$

- name: myapp              # Simple naming
- name: my_application     # Underscore separated
- name: app_v2             # Version included
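The database naming rule, unlike the user rule, permits uppercase letters and $ in subsequent chars (an illustrative check of the stated pattern, not Pigsty code):

```python
import re

# Pattern from the rule above: letter or underscore first,
# then letters, digits, underscores, or $; 63 chars max.
DBNAME_RE = re.compile(r'^[A-Za-z_][A-Za-z0-9_$]{0,62}$')

print([bool(DBNAME_RE.match(n)) for n in
       ['myapp', 'my_application', 'App_v2', 'pay$roll', '1bad']])
```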

state

Enum for database operation: create, absent, or recreate. Default create.

| State | Description |
|-------|-------------|
| create | Default, create or modify database, adjust mutable params if exists |
| absent | Delete database with DROP DATABASE WITH (FORCE) |
| recreate | Drop then create, for database reset |
- name: myapp                # state defaults to create
- name: olddb
  state: absent              # Delete database
- name: testdb
  state: recreate            # Rebuild database

owner

String, mutable. Database owner, defaults to pg_dbsu (postgres) if not specified.

The database owner has full control over the database, including creating schemas, tables, and extensions - useful for multi-tenant scenarios.

The target user must exist. Changing the owner executes the following (the old owner retains its existing privileges):

ALTER DATABASE "myapp" OWNER TO "new_owner";
GRANT ALL PRIVILEGES ON DATABASE "myapp" TO "new_owner";

comment

String, mutable. Database comment, defaults to business database {name}.

Set via COMMENT ON DATABASE, supports non-ASCII and special chars (Pigsty auto-escapes quotes). Stored in the pg_shdescription shared catalog, viewable via \l+.

- name: myapp
  comment: my main application database
COMMENT ON DATABASE "myapp" IS 'my main application database';

template

String, immutable. Template database for creation, default template1.

PostgreSQL’s CREATE DATABASE clones the template - new database inherits all objects, extensions, schemas, permissions. Pigsty customizes template1 during cluster init, so new databases inherit these settings.

| Template | Description |
|----------|-------------|
| template1 | Default, includes Pigsty pre-configured extensions/schemas/perms |
| template0 | Clean template, required for non-default locale providers |
| Custom database | Use existing database as template for cloning |

When using icu or builtin locale provider, must specify template: template0 since template1 locale settings can’t be overridden.

- name: myapp_icu
  template: template0        # Required for ICU
  locale_provider: icu
  icu_locale: zh-Hans

Using template0 skips monitoring extensions/schemas and default privileges - allowing fully custom database.

strategy

Enum, immutable. Clone strategy: FILE_COPY or WAL_LOG. Available PG15+.

| Strategy | Description | Use Case |
|----------|-------------|----------|
| FILE_COPY | Direct file copy, PG15+ default | Large templates, general |
| WAL_LOG | Clone via WAL logging | Small templates, non-blocking |

WAL_LOG doesn't block connections to the template database during cloning, but is less efficient for large templates. Ignored on PG14 and earlier.

- name: cloned_db
  template: source_db
  strategy: WAL_LOG          # WAL-based cloning

encoding

String, immutable. Character encoding, inherits from template if unspecified (usually UTF8).

Strongly recommend UTF8 unless special requirements. Cannot be changed after creation.

- name: legacy_db
  template: template0        # Use template0 for non-default encoding
  encoding: LATIN1

locale

String, immutable. Locale setting - sets both lc_collate and lc_ctype. Inherits from template (usually C).

Determines string sort order and character classification. Use C or POSIX for best performance and cross-platform consistency; use language-specific locales (e.g., zh_CN.UTF-8) for proper language sorting.

- name: chinese_db
  template: template0
  locale: zh_CN.UTF-8        # Chinese locale
  encoding: UTF8

lc_collate

String, immutable. String collation rule. Inherits from template (usually C).

Determines ORDER BY and comparison results. Common values: C (byte order, fastest), C.UTF-8, en_US.UTF-8, zh_CN.UTF-8. Cannot be changed after creation.

- name: myapp
  template: template0
  lc_collate: en_US.UTF-8    # English collation
  lc_ctype: en_US.UTF-8

lc_ctype

String, immutable. Character classification rule for upper/lower case, digits, letters. Inherits from template (usually C).

Affects upper(), lower(), regex \w, etc. Cannot be changed after creation.

locale_provider

Enum, immutable. Locale implementation provider: libc, icu, or builtin. Available PG15+, default libc.

| Provider | Version | Description |
|----------|---------|-------------|
| libc | - | OS C library, traditional default, varies by system |
| icu | PG15+ | ICU library, cross-platform consistent, more langs |
| builtin | PG17+ | PostgreSQL builtin, most efficient, C/C.UTF-8 only |

Using icu or builtin requires template: template0 with corresponding icu_locale or builtin_locale.

- name: fast_db
  template: template0
  locale_provider: builtin   # Builtin provider, most efficient
  builtin_locale: C.UTF-8

icu_locale

String, immutable. ICU locale identifier. Available PG15+ when locale_provider: icu.

ICU identifiers follow BCP 47. Common values:

| Value | Description |
|-------|-------------|
| en-US | US English |
| en-GB | British English |
| zh-Hans | Simplified Chinese |
| zh-Hant | Traditional Chinese |
| ja-JP | Japanese |
| ko-KR | Korean |
- name: chinese_app
  template: template0
  locale_provider: icu
  icu_locale: zh-Hans        # Simplified Chinese ICU collation
  encoding: UTF8

icu_rules

String, immutable. Custom ICU collation rules. Available PG16+.

Allows fine-tuning default sort behavior using ICU Collation Customization.

- name: custom_sort_db
  template: template0
  locale_provider: icu
  icu_locale: en-US
  icu_rules: '&V << w <<< W'  # Custom V/W sort order

builtin_locale

String, immutable. Builtin locale provider rules. Available PG17+ when locale_provider: builtin. Values: C or C.UTF-8.

builtin provider is PG17’s new builtin implementation - faster than libc with consistent cross-platform behavior. Suitable for C/C.UTF-8 collation only.

- name: fast_db
  template: template0
  locale_provider: builtin
  builtin_locale: C.UTF-8    # Builtin UTF-8 support
  encoding: UTF8

tablespace

String, mutable. Default tablespace, default pg_default.

Changing tablespace triggers physical data migration - PostgreSQL moves all objects to new tablespace. Can take long time for large databases, use cautiously.

- name: archive_db
  tablespace: slow_hdd       # Archive data on slow storage
ALTER DATABASE "archive_db" SET TABLESPACE "slow_hdd";

is_template

Boolean, mutable. Mark database as template, default false.

When true, any user with CREATEDB privilege can use this database as template for cloning. Template databases typically pre-install standard schemas, extensions, and data.

- name: app_template
  is_template: true          # Mark as template, allow user cloning
  schemas: [core, api]
  extensions: [postgis, pg_trgm]

When deleting an is_template: true database, Pigsty first executes ALTER DATABASE ... IS_TEMPLATE false, then drops it.

allowconn

Boolean, mutable. Allow connections, default true.

Setting false completely disables connections at database level - no user (including superuser) can connect. Used for maintenance or archival purposes.

- name: archive_db
  allowconn: false           # Disallow all connections
ALTER DATABASE "archive_db" ALLOW_CONNECTIONS false;

revokeconn

Boolean, mutable. Revoke PUBLIC CONNECT privilege, default false.

When true, Pigsty executes:

  • Revoke PUBLIC CONNECT, regular users can’t connect
  • Grant connect to replication user (replicator) and monitor user (dbuser_monitor)
  • Grant connect to admin user (dbuser_dba) and owner with WITH GRANT OPTION

Setting false restores PUBLIC CONNECT privilege.

- name: secure_db
  owner: dbuser_secure
  revokeconn: true           # Revoke public connect, only specified users

connlimit

Integer, mutable. Max concurrent connections, default -1 (unlimited).

Positive integer limits max simultaneous sessions. Doesn’t affect superusers.

- name: limited_db
  connlimit: 50              # Max 50 concurrent connections
ALTER DATABASE "limited_db" CONNECTION LIMIT 50;

baseline

String, one-time. SQL baseline file path executed after database creation.

Baseline files typically contain schema definitions, initial data, stored procedures. Path is relative to Ansible search path, usually in files/.

Baseline runs only on first creation; skipped if database exists. state: recreate re-runs baseline.

- name: myapp
  baseline: myapp_schema.sql  # Looks for files/myapp_schema.sql

schemas

Array, mutable (add/remove). Schema definitions to create or drop. Elements can be strings or objects.

Simple format - strings for schema names (create only):

schemas:
  - app
  - api
  - core

Full format - objects for owner and drop operations:

schemas:
  - name: app                # Schema name (required)
    owner: dbuser_app        # Schema owner (optional), generates AUTHORIZATION clause
  - name: deprecated
    state: absent            # Drop schema (CASCADE)

Create uses IF NOT EXISTS; drop uses CASCADE (deletes all objects in schema).

CREATE SCHEMA IF NOT EXISTS "app" AUTHORIZATION "dbuser_app";
DROP SCHEMA IF EXISTS "deprecated" CASCADE;

extensions

Array, mutable (add/remove). Extension definitions to install or uninstall. Elements can be strings or objects.

Simple format - strings for extension names (install only):

extensions:
  - postgis
  - pg_trgm
  - vector

Full format - objects for schema, version, and uninstall:

extensions:
  - name: vector             # Extension name (required)
    schema: public           # Install to schema (optional)
    version: '0.5.1'         # Specific version (optional)
  - name: old_extension
    state: absent            # Uninstall extension (CASCADE)

Install uses CASCADE to auto-install dependencies; uninstall uses CASCADE (deletes dependent objects).

CREATE EXTENSION IF NOT EXISTS "vector" WITH SCHEMA "public" VERSION '0.5.1' CASCADE;
DROP EXTENSION IF EXISTS "old_extension" CASCADE;

parameters

Object, mutable. Database-level config params via ALTER DATABASE ... SET. Applies to all sessions connecting to this database.

- name: analytics
  parameters:
    work_mem: '256MB'
    maintenance_work_mem: '512MB'
    statement_timeout: '5min'
    search_path: 'analytics,public'

Use special value DEFAULT (case-insensitive) to reset to PostgreSQL default:

parameters:
  work_mem: DEFAULT          # Reset to default
  statement_timeout: '30s'   # Set new value
ALTER DATABASE "myapp" SET "work_mem" = DEFAULT;
ALTER DATABASE "myapp" SET "statement_timeout" = '30s';

pgbouncer

Boolean, mutable. Add database to Pgbouncer pool list, default true.

Setting false excludes database from Pgbouncer - clients can’t access via connection pool. For internal management databases or direct-connect scenarios.

- name: internal_db
  pgbouncer: false           # No connection pool access

pool_mode

Enum, mutable. Pgbouncer pool mode: transaction, session, or statement. Default transaction.

| Mode | Description | Use Case |
|------|-------------|----------|
| transaction | Return connection after txn | Most OLTP apps, default |
| session | Return connection after session | Apps needing session state |
| statement | Return after each statement | Simple stateless queries |
- name: session_app
  pool_mode: session         # Session-level pooling

pool_size

Integer, mutable. Pgbouncer default pool size, default 64.

Pool size determines backend connections reserved for this database. Adjust based on workload.

- name: high_load_db
  pool_size: 128             # Larger pool for high load

pool_size_min

Integer, mutable. Pgbouncer minimum pool size, default 0.

Values > 0 pre-create specified backend connections for connection warming, reducing first-request latency.

- name: latency_sensitive
  pool_size_min: 10          # Pre-warm 10 connections

pool_reserve

Integer, mutable. Pgbouncer reserve pool size, default 32.

When default pool exhausted, Pgbouncer can allocate up to pool_reserve additional connections for burst traffic.

- name: bursty_db
  pool_size: 64
  pool_reserve: 64           # Allow burst to 128 connections

pool_connlimit

Integer, mutable. Max connections via Pgbouncer pool, default 100.

This is Pgbouncer-level limit, independent of database’s connlimit param.

- name: limited_pool_db
  pool_connlimit: 50         # Pool max 50 connections

pool_auth_user

String, mutable. User for Pgbouncer auth query.

Requires pgbouncer_auth_query enabled. When set, all Pgbouncer connections to this database use specified user for auth query password verification.

- name: myapp
  pool_auth_user: dbuser_monitor  # Use monitor user for auth query

register_datasource

Boolean, mutable. Register database to Grafana as PostgreSQL datasource, default true.

Set false to skip Grafana registration. For temp databases, test databases, or internal databases not needed in monitoring.

- name: temp_db
  register_datasource: false  # Don't register to Grafana

Template Inheritance

Many parameters inherit from template database if not explicitly specified. Default template is template1, whose encoding settings are determined by cluster init params:

| Cluster Param | Default | Description |
|---------------|---------|-------------|
| pg_encoding | UTF8 | Cluster encoding |
| pg_locale | C / C.UTF-8 (if supported) | Cluster locale |
| pg_lc_collate | C / C.UTF-8 (if supported) | Cluster collation |
| pg_lc_ctype | C / C.UTF-8 (if supported) | Cluster ctype |

New databases fork from template1, which is customized during PG_PROVISION with extensions, schemas, and default privileges. Unless you explicitly use another template.


Deep Customization

Pigsty provides rich customization params for tailoring the template database.

If these configurations don't meet your needs, use pg_init to specify custom cluster init scripts.


Locale Providers

PostgreSQL 15+ introduced locale_provider to select among different locale implementations. These settings are immutable after database creation.

Pigsty's configure wizard selects the builtin C.UTF-8/C locale provider based on the PG and OS versions. Databases inherit the cluster locale by default; to specify a different locale provider, you must create the database from template0.

Using ICU provider (PG15+):

- name: myapp_icu
  template: template0        # ICU requires template0
  locale_provider: icu
  icu_locale: en-US          # ICU locale rules
  encoding: UTF8

Using builtin provider (PG17+):

- name: myapp_builtin
  template: template0
  locale_provider: builtin
  builtin_locale: C.UTF-8    # Builtin locale rules
  encoding: UTF8

Provider comparison: libc (traditional, OS-dependent), icu (PG15+, cross-platform, feature-rich), builtin (PG17+, most efficient C/C.UTF-8).


Connection Pool

The Pgbouncer connection pool optimizes short-connection performance, reduces contention, prevents excessive connections from overwhelming the database, and provides flexibility during migrations.

Pigsty configures a 1:1 connection pool for each PostgreSQL instance, running as the same pg_dbsu (the postgres OS user by default). The pool communicates with the database via the /var/run/postgresql Unix socket.

Pigsty adds all databases in pg_databases to Pgbouncer by default; set pgbouncer: false to exclude specific databases. The Pgbouncer database list and config params are defined in /etc/pgbouncer/database.txt:

meta                        = host=/var/run/postgresql mode=session
grafana                     = host=/var/run/postgresql mode=transaction
bytebase                    = host=/var/run/postgresql auth_user=dbuser_meta
kong                        = host=/var/run/postgresql pool_size=32 reserve_pool=64
gitea                       = host=/var/run/postgresql min_pool_size=10
wiki                        = host=/var/run/postgresql
noco                        = host=/var/run/postgresql
mongo                       = host=/var/run/postgresql

When creating databases, the Pgbouncer database list is refreshed via an online reload, which does not affect existing connections.
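As noted above, individual databases can opt out of the pool with the pgbouncer flag. A hypothetical example:

```yaml
pg_databases:
  - name: analytics_db        # hypothetical: long-running analytical queries
    pgbouncer: false          # serve directly, bypassing the connection pool
```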

10.2.6 - HBA Rules

Detailed explanation of PostgreSQL and Pgbouncer Host-Based Authentication (HBA) rules configuration in Pigsty.

Overview

HBA (Host-Based Authentication) controls “who can connect to the database from where and how”. Pigsty manages HBA rules declaratively through pg_default_hba_rules and pg_hba_rules.

Pigsty renders the following config files during cluster init or HBA refresh:

| Config File | Path | Description |
|-------------|------|-------------|
| PostgreSQL HBA | /pg/data/pg_hba.conf | PostgreSQL server HBA rules |
| Pgbouncer HBA | /etc/pgbouncer/pgb_hba.conf | Connection pool HBA rules |

HBA rules are controlled by these parameters:

| Parameter | Level | Description |
|-----------|-------|-------------|
| pg_default_hba_rules | G | PostgreSQL global default HBA |
| pg_hba_rules | G/C/I | PostgreSQL cluster/instance additions |
| pgb_default_hba_rules | G | Pgbouncer global default HBA |
| pgb_hba_rules | G/C/I | Pgbouncer cluster/instance additions |

Rule features:

  • Role filtering: rules support a role field and are automatically filtered based on the instance's pg_role
  • Order sorting: rules support an order field that controls their position in the final config file
  • Two syntaxes: both an alias form (simplified) and a raw form (direct HBA text) are supported

Refresh HBA

After modifying config, re-render config files and reload services:

bin/pgsql-hba <cls>                   # Refresh entire cluster HBA (recommended)
bin/pgsql-hba <cls> <ip>...           # Refresh specific instances in cluster

Script executes the following playbook:

./pgsql.yml -l <cls> -t pg_hba,pg_reload,pgbouncer_hba,pgbouncer_reload -e pg_reload=true

PostgreSQL only: ./pgsql.yml -l <cls> -t pg_hba,pg_reload -e pg_reload=true

Pgbouncer only: ./pgsql.yml -l <cls> -t pgbouncer_hba,pgbouncer_reload


Parameter Details

pg_default_hba_rules

PostgreSQL global default HBA rule list, usually defined in all.vars, provides base access control for all clusters.

  • Type: rule[], Level: Global (G)
pg_default_hba_rules:
  - {user: '${dbsu}'    ,db: all         ,addr: local     ,auth: ident ,title: 'dbsu access via local os user ident'  ,order: 100}
  - {user: '${dbsu}'    ,db: replication ,addr: local     ,auth: ident ,title: 'dbsu replication from local os ident' ,order: 150}
  - {user: '${repl}'    ,db: replication ,addr: localhost ,auth: pwd   ,title: 'replicator replication from localhost',order: 200}
  - {user: '${repl}'    ,db: replication ,addr: intra     ,auth: pwd   ,title: 'replicator replication from intranet' ,order: 250}
  - {user: '${repl}'    ,db: postgres    ,addr: intra     ,auth: pwd   ,title: 'replicator postgres db from intranet' ,order: 300}
  - {user: '${monitor}' ,db: all         ,addr: localhost ,auth: pwd   ,title: 'monitor from localhost with password' ,order: 350}
  - {user: '${monitor}' ,db: all         ,addr: infra     ,auth: pwd   ,title: 'monitor from infra host with password',order: 400}
  - {user: '${admin}'   ,db: all         ,addr: infra     ,auth: ssl   ,title: 'admin @ infra nodes with pwd & ssl'   ,order: 450}
  - {user: '${admin}'   ,db: all         ,addr: world     ,auth: ssl   ,title: 'admin @ everywhere with ssl & pwd'    ,order: 500}
  - {user: '+dbrole_readonly',db: all    ,addr: localhost ,auth: pwd   ,title: 'pgbouncer read/write via local socket',order: 550}
  - {user: '+dbrole_readonly',db: all    ,addr: intra     ,auth: pwd   ,title: 'read/write biz user via password'     ,order: 600}
  - {user: '+dbrole_offline' ,db: all    ,addr: intra     ,auth: pwd   ,title: 'allow etl offline tasks from intranet',order: 650}

pg_hba_rules

PostgreSQL cluster/instance-level additional HBA rules; they can be defined at the cluster or instance level, merged with the default rules, and sorted by order.

  • Type: rule[], Level: Global/Cluster/Instance (G/C/I), Default: []
pg_hba_rules:
  - {user: app_user, db: app_db, addr: intra, auth: pwd, title: 'app user access'}

pgb_default_hba_rules

Pgbouncer global default HBA rule list, usually defined in all.vars.

  • Type: rule[], Level: Global (G)
pgb_default_hba_rules:
  - {user: '${dbsu}'    ,db: pgbouncer   ,addr: local     ,auth: peer  ,title: 'dbsu local admin access with os ident',order: 100}
  - {user: 'all'        ,db: all         ,addr: localhost ,auth: pwd   ,title: 'allow all user local access with pwd' ,order: 150}
  - {user: '${monitor}' ,db: pgbouncer   ,addr: intra     ,auth: pwd   ,title: 'monitor access via intranet with pwd' ,order: 200}
  - {user: '${monitor}' ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other monitor access addr' ,order: 250}
  - {user: '${admin}'   ,db: all         ,addr: intra     ,auth: pwd   ,title: 'admin access via intranet with pwd'   ,order: 300}
  - {user: '${admin}'   ,db: all         ,addr: world     ,auth: deny  ,title: 'reject all other admin access addr'   ,order: 350}
  - {user: 'all'        ,db: all         ,addr: intra     ,auth: pwd   ,title: 'allow all user intra access with pwd' ,order: 400}

pgb_hba_rules

Pgbouncer cluster/instance-level additional HBA rules.

  • Type: rule[], Level: Global/Cluster/Instance (G/C/I), Default: []

Note: Pgbouncer HBA does not support db: replication.
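A cluster-level example in the same alias form as pg_hba_rules (the user and database names are illustrative):

```yaml
pgb_hba_rules:
  - {user: app_user, db: app_db, addr: intra, auth: pwd, title: 'app user pool access via intranet'}
```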


Rule Fields

Each HBA rule is a YAML dict supporting these fields:

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| user | string | No | all | Username; supports all, placeholders, +rolename |
| db | string | No | all | Database name; supports all, replication, or a db name |
| addr | string | Yes* | - | Address alias or CIDR, see Address Aliases |
| auth | string | No | pwd | Auth method alias, see Auth Methods |
| title | string | No | - | Rule description, rendered as a comment in the config |
| role | string | No | common | Instance role filter, see Role Filtering |
| order | int | No | 1000 | Sort weight, lower first, see Order Sorting |
| rules | list | Yes* | - | Raw HBA text lines, mutually exclusive with addr |

Either addr or rules must be specified. Use rules to write raw HBA format directly.
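For cases the alias form cannot express, a rule can carry raw HBA lines via the rules field. A sketch (the CIDR and title are illustrative):

```yaml
pg_hba_rules:
  - title: allow password access from a dedicated subnet
    role: common                                  # applies to all instances
    rules:
      - host all all 10.1.0.0/16 scram-sha-256    # raw pg_hba.conf line, rendered verbatim
```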


Address Aliases

Pigsty provides address aliases to simplify HBA rule writing:

| Alias | Expands To | Description |
|-------|------------|-------------|
| local | Unix socket | Local Unix socket |
| localhost | Unix socket + 127.0.0.1/32 + ::1/128 | Loopback addresses |
| admin | ${admin_ip}/32 | Admin IP address |
| infra | All infra group node IPs | Infrastructure nodes |
| cluster | All current cluster member IPs | Same-cluster instances |
| intra / intranet | 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 | Intranet CIDRs |
| world / all | 0.0.0.0/0 + ::/0 | Any address (IPv4 + IPv6) |
| &lt;CIDR&gt; | Used directly | e.g., 192.168.1.0/24 |

Intranet CIDRs can be customized via node_firewall_intranet:

node_firewall_intranet:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16

Auth Methods

Pigsty provides auth method aliases for simplified config:

| Alias | Actual Method | Connection Type | Description |
|-------|---------------|-----------------|-------------|
| pwd | scram-sha-256 or md5 | host | Auto-selected based on pg_pwd_enc |
| ssl | scram-sha-256 or md5 | hostssl | Force SSL + password |
| ssl-sha | scram-sha-256 | hostssl | Force SSL + SCRAM-SHA-256 |
| ssl-md5 | md5 | hostssl | Force SSL + MD5 |
| cert | cert | hostssl | Client certificate auth |
| trust | trust | host | Unconditional trust (dangerous) |
| deny / reject | reject | host | Reject connection |
| ident | ident | host | OS user mapping (PostgreSQL) |
| peer | peer | local | OS user mapping (Pgbouncer/local) |
pg_pwd_enc defaults to scram-sha-256 and can be set to md5 for legacy client compatibility.
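To illustrate how the aliases expand, a rule like {user: app_user, db: app_db, addr: intra, auth: pwd} would render into pg_hba.conf roughly as follows, assuming the default scram-sha-256 encryption and default intranet CIDRs (exact formatting may differ):

```ini
# app user access [common]
host  app_db  app_user  10.0.0.0/8      scram-sha-256
host  app_db  app_user  172.16.0.0/12   scram-sha-256
host  app_db  app_user  192.168.0.0/16  scram-sha-256
```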


User Variables

HBA rules support these user placeholders, auto-replaced with actual usernames during rendering:

| Placeholder | Default | Corresponding Param |
|-------------|---------|---------------------|
| ${dbsu} | postgres | pg_dbsu |
| ${repl} | replicator | pg_replication_username |
| ${monitor} | dbuser_monitor | pg_m