
Administration

Infrastructure components and INFRA cluster administration SOP: create, destroy, scale out, scale in, certificates, repositories…

This section covers daily administration and operations for Pigsty deployments.


Create INFRA Module

Use the infra.yml playbook to install the INFRA module on the infra group:

./infra.yml     # Install INFRA module on infra group

Uninstall INFRA Module

Use the dedicated infra-rm.yml playbook to remove the INFRA module from the infra group:

./infra-rm.yml  # Remove INFRA module from infra group

Manage Local Repository

Pigsty includes a local yum/apt repo for software packages. Repo behavior is managed with the following variables and tasks:

Repo Variables

| Variable | Description |
|----------|-------------|
| repo_enabled | Enable local repo on node |
| repo_upstream | Upstream repos to include |
| repo_remove | Remove upstream repos if true |
| repo_url_pkg | Extra packages to download |
| repo_clean | Clean repo cache (makecache) |
| repo_pkg | Packages to include |
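
For example, these variables might appear under the global vars of the inventory. A minimal sketch (values are illustrative, not defaults; check Configuration: INFRA - REPO for exact formats):

all:
  vars:
    repo_enabled: true      # build a local repo on this infra node
    repo_remove: true       # remove existing upstream repo files first
    repo_clean: true        # clean repo cache and rebuild metadata
    repo_url_pkg: []        # extra packages to download by URL (illustrative)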

Repo Tasks

./infra.yml -t repo              # Create or update repo

Repo location: /www/pigsty served by Nginx.

More: Configuration: INFRA - REPO

1 - Ansible

Using Ansible to run administration commands

Ansible is installed by default on all INFRA nodes and can be used to manage the entire deployment.

Pigsty implements automation based on Ansible, following the Infrastructure-as-Code philosophy.

Ansible knowledge is useful for managing databases and infrastructure, but not required. You only need to know how to execute playbooks: YAML files that define a series of automated tasks.


Installation

Pigsty automatically installs ansible and its dependencies during the bootstrap process. For manual installation, use the following commands:

# Debian / Ubuntu
sudo apt install -y ansible python3-jmespath

# EL 10
sudo dnf install -y ansible python-jmespath

# EL 8/9
sudo dnf install -y ansible python3.12-jmespath

# EL 7
sudo yum install -y ansible python-jmespath

macOS

macOS users can install using Homebrew:

brew install ansible
pip3 install jmespath

Basic Usage

To run a playbook, simply execute ./path/to/playbook.yml. Here are the most commonly used Ansible command-line parameters:

| Purpose | Parameter | Description |
|---------|-----------|-------------|
| Where | -l / --limit <pattern> | Limit target hosts/groups/patterns |
| What | -t / --tags <tags> | Only run tasks with specified tags |
| How | -e / --extra-vars <vars> | Pass extra command-line variables |
| Config | -i / --inventory <path> | Specify inventory file path |

Limiting Hosts

Use -l|--limit <pattern> to limit execution to specific groups, hosts, or patterns:

./node.yml                      # Execute on all nodes
./pgsql.yml -l pg-test          # Only execute on pg-test cluster
./pgsql.yml -l pg-*             # Execute on all clusters starting with pg-
./pgsql.yml -l 10.10.10.10      # Only execute on specific IP host

Running playbooks without host limits can be very dangerous! By default, most playbooks execute on all hosts. Use with caution!


Limiting Tasks

Use -t|--tags <tags> to only execute task subsets with specified tags:

./infra.yml -t repo           # Only execute tasks to create local repo
./infra.yml -t repo_upstream  # Only execute tasks to add upstream repos
./node.yml -t node_pkg        # Only execute tasks to install node packages
./pgsql.yml -t pg_hba         # Only execute tasks to render pg_hba.conf

Passing Variables

Use -e|--extra-vars <key=value> to override variables at runtime:

./pgsql.yml -e pg_clean=true         # Force clean existing PG instances
./pgsql-rm.yml -e pg_rm_pkg=false    # Keep packages when uninstalling
./node.yml -e '{"node_tune":"tiny"}' # Pass variables in JSON format
./pgsql.yml -e @/path/to/config.yml  # Load variables from YAML file

Specifying Inventory

By default, Ansible uses pigsty.yml in the current directory as the inventory. Use -i|--inventory <path> to specify a different config file:

./pgsql.yml -i files/pigsty/full.yml -l pg-test

[!NOTE]

To permanently change the default config file path, modify the inventory parameter in ansible.cfg.
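
For example, a minimal ansible.cfg in the Pigsty home directory might point the defaults at another inventory (path illustrative):

[defaults]
inventory = files/pigsty/full.yml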

2 - Playbooks

Built-in Ansible playbooks in Pigsty

Pigsty uses idempotent Ansible playbooks for management and control. Running playbooks requires the ansible-playbook executable in the system PATH, so install Ansible before executing them.

Available Playbooks

| Module | Playbook | Purpose |
|--------|----------|---------|
| INFRA | install.yml | One-click Pigsty installation |
| INFRA | infra.yml | Initialize Pigsty infrastructure on infra nodes |
| INFRA | infra-rm.yml | Remove infrastructure components from infra nodes |
| INFRA | cache.yml | Create offline installation packages from target nodes |
| INFRA | cert.yml | Issue certificates using Pigsty self-signed CA |
| NODE | node.yml | Initialize nodes, configure to desired state |
| NODE | node-rm.yml | Remove nodes from Pigsty |
| PGSQL | pgsql.yml | Initialize HA PostgreSQL cluster, or add new replica |
| PGSQL | pgsql-rm.yml | Remove PostgreSQL cluster, or remove replica |
| PGSQL | pgsql-db.yml | Add new business database to existing cluster |
| PGSQL | pgsql-user.yml | Add new business user to existing cluster |
| PGSQL | pgsql-pitr.yml | Perform point-in-time recovery (PITR) on cluster |
| PGSQL | pgsql-monitor.yml | Monitor remote PostgreSQL using local exporters |
| PGSQL | pgsql-migration.yml | Generate migration manual and scripts for PostgreSQL |
| PGSQL | slim.yml | Install Pigsty with minimal components |
| REDIS | redis.yml | Initialize Redis cluster/node/instance |
| REDIS | redis-rm.yml | Remove Redis cluster/node/instance |
| ETCD | etcd.yml | Initialize ETCD cluster, or add new member |
| ETCD | etcd-rm.yml | Remove ETCD cluster, or remove existing member |
| MINIO | minio.yml | Initialize MinIO cluster |
| MINIO | minio-rm.yml | Remove MinIO cluster |
| DOCKER | docker.yml | Install Docker on nodes |
| DOCKER | app.yml | Install applications using Docker Compose |
| FERRET | mongo.yml | Install Mongo/FerretDB on nodes |

Deployment Strategy

The install.yml playbook orchestrates specialized playbooks in the following group order for complete deployment:

  • infra: infra.yml (-l infra)
  • nodes: node.yml
  • etcd: etcd.yml (-l etcd)
  • minio: minio.yml (-l minio)
  • pgsql: pgsql.yml

Circular Dependency Note: There is a weak circular dependency between NODE and INFRA: registering a NODE to INFRA requires INFRA to already exist, while the INFRA module itself runs on a managed NODE. The solution is to initialize infra nodes first, then add other nodes. To complete the entire deployment in one pass, use install.yml.
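
Running install.yml is thus roughly equivalent to executing the individual playbooks in order; a sketch following the default inventory groups:

./infra.yml -l infra     # init INFRA module on the infra group first
./node.yml               # init all nodes (they can now register to INFRA)
./etcd.yml -l etcd       # init etcd cluster (DCS for PostgreSQL HA)
./minio.yml -l minio     # init MinIO cluster (optional backup storage)
./pgsql.yml              # init PostgreSQL clusters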


Safety Notes

Most playbooks are idempotent. This means that when protection options are not enabled, some deployment playbooks may wipe existing databases and recreate them. Use extra caution with the pgsql, minio, and infra playbooks; read the documentation carefully and proceed with caution.

Best Practices

  1. Read playbook documentation carefully before execution
  2. Press Ctrl-C immediately to stop when anomalies occur
  3. Test in non-production environments first
  4. Use -l parameter to limit target hosts, avoiding unintended hosts
  5. Use -t parameter to specify tags, executing only specific tasks

Dry-Run Mode

Use --check --diff options to preview changes without actually executing:

# Preview changes without execution
./pgsql.yml -l pg-test --check --diff

# Check specific tasks with tags
./pgsql.yml -l pg-test -t pg_config --check --diff

3 - Nginx Management

Nginx management, web portal configuration, web server, upstream services

Pigsty installs Nginx on INFRA nodes as the entry point for all web services, listening on standard ports 80/443.

In Pigsty, you can configure Nginx to provide various services through the inventory:

  • Expose web interfaces for monitoring components like Grafana, VictoriaMetrics (VMUI), Alertmanager, and VictoriaLogs
  • Serve static files (software repos, documentation sites, websites, etc.)
  • Proxy custom application services (internal apps, database management UIs, Docker application interfaces, etc.)
  • Automatically issue self-signed HTTPS certificates, or use Certbot to obtain free Let’s Encrypt certificates
  • Expose services through a single port using different subdomains for unified access

Basic Configuration

Customize Nginx behavior via the infra_portal parameter:

infra_portal:
  home: { domain: i.pigsty }

infra_portal is a dictionary where each key defines a service and the value is the service configuration. Only services with a domain defined will generate corresponding Nginx config files.

  • home: Special default server for homepage and built-in monitoring component reverse proxies
  • Proxy services: Specify upstream service address via endpoint for reverse proxy
  • Static services: Specify local directory via path for static file serving

Server Parameters

Basic Parameters

| Parameter | Description |
|-----------|-------------|
| domain | Proxy domain (optional) |
| endpoint | Upstream service address (IP:PORT or socket) |
| path | Local directory for static content |
| scheme | Protocol type (http/https), default http |
| domains | Additional domain list (aliases) |

SSL/TLS Options

| Parameter | Description |
|-----------|-------------|
| certbot | Enable Let's Encrypt cert management, value is cert name |
| cert | Custom certificate file path |
| key | Custom private key file path |
| enforce_https | Force HTTPS redirect (301) |

Advanced Settings

| Parameter | Description |
|-----------|-------------|
| config | Custom Nginx config snippet |
| index | Enable directory listing (for static) |
| log | Custom log file name |
| websocket | Enable WebSocket support |
| auth | Enable Basic Auth |
| realm | Basic Auth prompt message |
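
A hypothetical entry combining several of these options (service name, paths, and prompt are made up for illustration; check the parameter reference for exact value formats):

reports:
  domain: rpt.pigsty
  path: "/www/reports"        # serve static files from this directory
  index: true                 # enable directory listing
  auth: true                  # require Basic Auth
  realm: "Internal Reports"   # Basic Auth prompt message
  log: "reports.log"          # dedicated access log file name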

Configuration Examples

Reverse Proxy Services

grafana: { domain: g.pigsty, endpoint: "${admin_ip}:3000", websocket: true }
pgadmin: { domain: adm.pigsty, endpoint: "127.0.0.1:8885" }

Static Files and Directory Listing

repo: { domain: repo.pigsty.io, path: "/www/repo", index: true }

Custom SSL Certificate

secure_app:
  domain: secure.pigsty.io
  endpoint: "${admin_ip}:8443"
  cert: "/etc/ssl/certs/custom.crt"
  key: "/etc/ssl/private/custom.key"

Using Let’s Encrypt Certificates

grafana:
  domain: demo.pigsty.io
  endpoint: "${admin_ip}:3000"
  websocket: true
  certbot: pigsty.demo    # Cert name, multiple domains can share one cert

Force HTTPS Redirect

web.io:
  domain: en.pigsty.io
  path: "/www/web.io"
  certbot: pigsty.doc
  enforce_https: true

Custom Config Snippet

web.cc:
  domain: pigsty.io
  path: "/www/web.io"
  domains: [ en.pigsty.io ]
  certbot: pigsty.doc
  config: |
    # rewrite /en/ to /
    location /en/ {
        rewrite ^/en/(.*)$ /$1 permanent;
    }

Management Commands

./infra.yml -t nginx           # Full Nginx reconfiguration
./infra.yml -t nginx_config    # Regenerate config files
./infra.yml -t nginx_launch    # Restart Nginx service
./infra.yml -t nginx_cert      # Regenerate SSL certificates
./infra.yml -t nginx_certbot   # Sign certificates with certbot
./infra.yml -t nginx_reload    # Reload Nginx configuration

Domain Resolution

Three ways to resolve domains to Pigsty servers:

  1. Public domains: Configure via DNS provider
  2. Internal DNS server: Configure internal DNS resolution
  3. Local hosts file: Modify /etc/hosts

For local development, add to /etc/hosts:

<your_public_ip_address> i.pigsty g.pigsty p.pigsty a.pigsty

Pigsty includes dnsmasq service, configurable via dns_records parameter for internal DNS resolution.


HTTPS Configuration

Configure HTTPS via nginx_sslmode parameter:

| Mode | Description |
|------|-------------|
| disable | Listen on HTTP only (nginx_port) |
| enable | Also listen on HTTPS (nginx_ssl_port), default self-signed cert |
| enforce | Force redirect to HTTPS; all port 80 requests get a 301 redirect |

For self-signed certificates, several access options:

  • Trust the self-signed CA in the browser or OS (download at http://<ip>/ca.crt; see the sketch below)
  • Use browser security bypass (type “thisisunsafe” in Chrome)
  • Configure proper CA-signed certs or Let’s Encrypt for production
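
A sketch for trusting the Pigsty CA at the OS level on Linux clients (IP illustrative; browsers with their own trust stores may need a separate import):

# Download the CA certificate from the infra node
curl -o pigsty-ca.crt http://10.10.10.10/ca.crt

# EL systems: add to the system trust store
sudo cp pigsty-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

# Debian/Ubuntu: note the required .crt suffix
sudo cp pigsty-ca.crt /usr/local/share/ca-certificates/pigsty-ca.crt
sudo update-ca-certificates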

Certbot Certificates

Pigsty supports using Certbot to request free Let’s Encrypt certificates.

Enable Certbot

  1. Add the certbot parameter to services in infra_portal, specifying the cert name
  2. Configure certbot_email with a valid email address
  3. Set certbot_sign to true for auto-signing during deployment:

certbot_sign: true
certbot_email: [email protected]

Manual Certificate Signing

./infra.yml -t nginx_certbot   # Sign Let's Encrypt certificates

Or run the scripts directly on the server:

/etc/nginx/sign-cert           # Sign certificates
/etc/nginx/link-cert           # Link certificates to Nginx config directory

For more info, see Certbot: Request and Renew HTTPS Certificates


Default Homepage

Pigsty’s default home server provides these built-in routes:

| Path | Description |
|------|-------------|
| / | Homepage navigation |
| /ui/ | Grafana monitoring dashboards |
| /vmetrics/ | VictoriaMetrics VMUI |
| /vlogs/ | VictoriaLogs log query |
| /vtraces/ | VictoriaTraces tracing |
| /vmalert/ | VMAlert alerting rules |
| /alertmgr/ | AlertManager alert management |
| /blackbox/ | Blackbox Exporter |
| /pev | PostgreSQL Explain visualization |
| /haproxy/<cluster>/ | HAProxy admin interface (if any) |

These routes allow accessing all monitoring components through a single entry point, without configuring multiple domains.
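
For example, with i.pigsty resolving to the infra node, components can be reached through the single entry (paths from the table above):

curl -s http://i.pigsty/              # homepage navigation
curl -s http://i.pigsty/ui/           # Grafana via the unified entry
curl -s http://i.pigsty/vmetrics/     # VictoriaMetrics VMUI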


Best Practices

  • Use domain names instead of IP:PORT for service access
  • Properly configure DNS resolution or hosts file
  • Enable WebSocket for real-time apps (e.g., Grafana, Jupyter)
  • Enable HTTPS for production
  • Use meaningful subdomains to organize services
  • Monitor Let’s Encrypt certificate expiration
  • Use config parameter for custom Nginx configurations

Full Example

Here’s the Nginx configuration used by Pigsty’s public demo site demo.pigsty.io:

infra_portal:
  home         : { domain: i.pigsty }
  io           : { domain: pigsty.io      ,path: "/www/pigsty.io"   ,cert: /etc/cert/pigsty.io.crt ,key: /etc/cert/pigsty.io.key }
  minio        : { domain: m.pigsty.io    ,endpoint: "${admin_ip}:9001" ,scheme: https ,websocket: true }
  postgrest    : { domain: api.pigsty.io  ,endpoint: "127.0.0.1:8884" }
  pgadmin      : { domain: adm.pigsty.io  ,endpoint: "127.0.0.1:8885" }
  pgweb        : { domain: cli.pigsty.io  ,endpoint: "127.0.0.1:8886" }
  bytebase     : { domain: ddl.pigsty.io  ,endpoint: "127.0.0.1:8887" }
  jupyter      : { domain: lab.pigsty.io  ,endpoint: "127.0.0.1:8888" ,websocket: true }
  gitea        : { domain: git.pigsty.io  ,endpoint: "127.0.0.1:8889" }
  wiki         : { domain: wiki.pigsty.io ,endpoint: "127.0.0.1:9002" }
  noco         : { domain: noco.pigsty.io ,endpoint: "127.0.0.1:9003" }
  supa         : { domain: supa.pigsty.io ,endpoint: "10.10.10.10:8000" ,websocket: true }
  dify         : { domain: dify.pigsty.io ,endpoint: "10.10.10.10:8001" ,websocket: true }
  odoo         : { domain: odoo.pigsty.io ,endpoint: "127.0.0.1:8069"   ,websocket: true }
  mm           : { domain: mm.pigsty.io   ,endpoint: "10.10.10.10:8065" ,websocket: true }

4 - Software Repository

Managing local APT/YUM software repositories

Pigsty supports creating and managing local APT/YUM software repositories for offline deployment or accelerated package installation.


Quick Start

To add packages to the local repository:

  1. Add packages to repo_packages (default packages)
  2. Add packages to repo_extra_packages (extra packages)
  3. Run the build command:

./infra.yml -t repo_build   # Build local repo from upstream
./node.yml -t node_repo     # Refresh node repository cache
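
For example, to pull an additional extension into the local repo, you might add it to repo_extra_packages in pigsty.yml before rebuilding (package name illustrative):

repo_extra_packages: [ postgis ]   # extra packages to download into the local repo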

Package Aliases

Pigsty predefines common package combinations for batch installation:

EL Systems (RHEL/CentOS/Rocky)

| Alias | Description |
|-------|-------------|
| node-bootstrap | Ansible, Python3 tools, SSH related |
| infra-package | Nginx, etcd, HAProxy, monitoring exporters, MinIO |
| pgsql-utility | Patroni, pgBouncer, pgBackRest, PG tools |
| pgsql | Full PostgreSQL (server, client, extensions) |
| pgsql-mini | Minimal PostgreSQL installation |

Debian/Ubuntu Systems

| Alias | Description |
|-------|-------------|
| node-bootstrap | Ansible, development tools |
| infra-package | Infrastructure components (Debian naming) |
| pgsql-client | PostgreSQL client |
| pgsql-server | PostgreSQL server and related packages |
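
These aliases can be used in package lists just like concrete package names; a hedged sketch mixing both:

repo_packages: [ node-bootstrap, infra-package, pgsql-utility ]  # aliases expand to package sets
repo_extra_packages: [ pgsql-mini, postgis ]                     # alias plus a concrete package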

Playbook Tasks

Main Tasks

| Task | Description |
|------|-------------|
| repo | Create local repo from internet or offline packages |
| repo_build | Build from upstream if not exists |
| repo_upstream | Add upstream repository files |
| repo_pkg | Download packages and dependencies |
| repo_create | Create/update YUM or APT repository |
| repo_nginx | Start Nginx file server |

Complete Task List

./infra.yml -t repo_dir          # Create local repository directory
./infra.yml -t repo_check        # Check if local repo exists
./infra.yml -t repo_prepare      # Use existing repo directly
./infra.yml -t repo_build        # Build repo from upstream
./infra.yml -t repo_upstream     # Add upstream repositories
./infra.yml -t repo_remove       # Delete existing repo files
./infra.yml -t repo_add          # Add repo to system directory
./infra.yml -t repo_url_pkg      # Download packages from internet
./infra.yml -t repo_cache        # Create metadata cache
./infra.yml -t repo_boot_pkg     # Install bootstrap packages
./infra.yml -t repo_pkg          # Download packages and dependencies
./infra.yml -t repo_create       # Create local repository
./infra.yml -t repo_use          # Add new repo to system
./infra.yml -t repo_nginx        # Start Nginx file server

Common Operations

Add New Packages

# 1. Configure upstream repositories
./infra.yml -t repo_upstream

# 2. Download packages and dependencies
./infra.yml -t repo_pkg

# 3. Build local repository metadata
./infra.yml -t repo_create

Refresh Node Repositories

./node.yml -t node_repo    # Refresh repository cache on all nodes

Complete Repository Rebuild

./infra.yml -t repo        # Create repo from internet or offline packages

5 - Domain Management

Configure local or public domain names to access Pigsty services.

Use domain names instead of IP addresses to access Pigsty’s various web services.


Quick Start

Add the following static resolution records to /etc/hosts:

10.10.10.10 i.pigsty g.pigsty p.pigsty a.pigsty

Replace the IP address with your actual Pigsty node's IP.
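
A one-liner to append these records on the client machine (requires sudo; IP illustrative):

echo "10.10.10.10 i.pigsty g.pigsty p.pigsty a.pigsty" | sudo tee -a /etc/hosts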


Why Use Domain Names

  • Easier to remember than IP addresses
  • Flexible pointing to different IPs
  • Unified service management through Nginx
  • Support for HTTPS encryption
  • Prevent ISP hijacking in some regions
  • Allow access to internally bound services via proxy

DNS Mechanism

DNS Protocol: Resolves domain names to IP addresses. Multiple domains can point to the same IP.

HTTP Protocol: Uses the Host header to route requests to different sites on the same port (80/443).
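
You can observe this routing by sending requests to the same IP with different Host headers:

curl -s -H "Host: g.pigsty" http://10.10.10.10/   # routed to Grafana
curl -s -H "Host: i.pigsty" http://10.10.10.10/   # routed to the default homepage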


Default Domains

Pigsty predefines the following default domains:

| Domain | Service | Port | Purpose |
|--------|---------|------|---------|
| i.pigsty | Nginx | 80/443 | Default homepage, local repo, unified entry |
| g.pigsty | Grafana | 3000 | Monitoring and visualization |
| p.pigsty | VictoriaMetrics | 8428 | VMUI/PromQL entry |
| a.pigsty | AlertManager | 9059 | Alert routing |
| m.pigsty | MinIO | 9001 | Object storage console |

Resolution Methods

Local Static Resolution

Add entries to /etc/hosts on the client machine:

# Linux/macOS
sudo vim /etc/hosts

# Windows
notepad C:\Windows\System32\drivers\etc\hosts

Add content:

10.10.10.10 i.pigsty g.pigsty p.pigsty a.pigsty m.pigsty

Internal Dynamic Resolution

Pigsty includes dnsmasq as an internal DNS server. Configure managed nodes to use the INFRA node as their DNS server:

node_dns_servers: ['${admin_ip}']   # Use INFRA node as DNS server
node_dns_method: add                # Add to existing DNS server list

Configure domain records resolved by dnsmasq via dns_records:

dns_records:
  - "${admin_ip} i.pigsty"
  - "${admin_ip} m.pigsty sss.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

Public Domain Names

Purchase a domain and add DNS A record pointing to public IP:

  1. Purchase domain from registrar (e.g., example.com)
  2. Configure A record pointing to server public IP
  3. Use real domain in infra_portal

Built-in DNS Service

Pigsty runs dnsmasq on INFRA nodes as a DNS server.

| Parameter | Default | Description |
|-----------|---------|-------------|
| dns_enabled | true | Enable DNS service |
| dns_port | 53 | DNS listen port |
| dns_records | See below | Default DNS records |

Default DNS records:

dns_records:
  - "${admin_ip} i.pigsty"
  - "${admin_ip} m.pigsty sss.pigsty api.pigsty adm.pigsty cli.pigsty ddl.pigsty"

Dynamic DNS Registration

Pigsty automatically registers DNS records for PostgreSQL clusters and instances:

  • Instance-level DNS: <pg_instance> points to instance IP (e.g., pg-meta-1)
  • Cluster-level DNS: <pg_cluster> points to primary IP or VIP (e.g., pg-meta)

The cluster-level DNS target is controlled by pg_dns_target:

| Value | Description |
|-------|-------------|
| auto | Auto-select: use VIP if available, else primary IP |
| primary | Always point to primary IP |
| vip | Always point to VIP (requires VIP enabled) |
| none | Don't register cluster DNS |
| <ip> | Specify fixed IP address |

Add a suffix to cluster DNS records via pg_dns_suffix.
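
A sketch of cluster-level DNS settings inside a PG cluster definition (cluster name, target, and suffix are illustrative):

pg-test:
  vars:
    pg_cluster: pg-test
    pg_dns_target: primary      # always point cluster DNS at the primary IP
    pg_dns_suffix: .db.pigsty   # record becomes pg-test.db.pigsty (assumption)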


Node DNS Configuration

Pigsty manages DNS configuration on managed nodes.

Static hosts Records

Configure static /etc/hosts records via node_etc_hosts:

node_etc_hosts:
  - "${admin_ip} i.pigsty sss.pigsty"
  - "10.10.10.20 db.example.com"

DNS Server Configuration

| Parameter | Default | Description |
|-----------|---------|-------------|
| node_dns_method | add | DNS config method |
| node_dns_servers | ['${admin_ip}'] | DNS server list |
| node_dns_options | See below | resolv.conf options |

node_dns_method options:

| Value | Description |
|-------|-------------|
| add | Prepend to existing DNS server list |
| overwrite | Completely overwrite DNS config |
| none | Don't modify DNS config |

Default DNS options:

node_dns_options:
  - options single-request-reopen timeout:1

HTTPS Certificates

Pigsty uses self-signed certificates by default. Options include:

  • Ignore warnings, use HTTP
  • Trust self-signed CA certificate (download at http://<ip>/ca.crt)
  • Use real CA or get free public domain certs via Certbot

See CA and Certificates documentation for details.


Extended Domains

Pigsty reserves the following domains for various application services:

| Domain | Purpose |
|--------|---------|
| adm.pigsty | PgAdmin interface |
| ddl.pigsty | Bytebase DDL management |
| cli.pigsty | PgWeb CLI interface |
| api.pigsty | PostgREST API service |
| lab.pigsty | Jupyter environment |
| git.pigsty | Gitea Git service |
| wiki.pigsty | Wiki.js docs |
| noco.pigsty | NocoDB |
| supa.pigsty | Supabase |
| dify.pigsty | Dify AI |
| odoo.pigsty | Odoo ERP |
| mm.pigsty | Mattermost |

Using these domains requires configuring corresponding services in infra_portal.


Management Commands

./infra.yml -t dns            # Full DNS service configuration
./infra.yml -t dns_config     # Regenerate dnsmasq config
./infra.yml -t dns_record     # Update default DNS records
./infra.yml -t dns_launch     # Restart dnsmasq service

./node.yml -t node_hosts      # Configure node /etc/hosts
./node.yml -t node_resolv     # Configure node DNS resolver

./pgsql.yml -t pg_dns         # Register PostgreSQL DNS records
./pgsql.yml -t pg_dns_ins     # Register instance-level DNS only
./pgsql.yml -t pg_dns_cls     # Register cluster-level DNS only

6 - Module Management

INFRA module management SOP: define, create, destroy, scale out, scale in

This document covers daily management operations for the INFRA module, including installation, uninstallation, scaling, and component maintenance.


Install INFRA Module

Use the infra.yml playbook to install the INFRA module on the infra group:

./infra.yml     # Install INFRA module on infra group

Uninstall INFRA Module

Use the infra-rm.yml playbook to uninstall the INFRA module from the infra group:

./infra-rm.yml  # Uninstall INFRA module from infra group

Scale Out INFRA Module

Assign infra_seq to new nodes and add them to the infra group in the inventory:

all:
  children:
    infra:
      hosts:
        10.10.10.10: { infra_seq: 1 }  # Existing node
        10.10.10.11: { infra_seq: 2 }  # New node

Use the -l limit option to execute the playbook on the new node only:

./infra.yml -l 10.10.10.11    # Install INFRA module on new node

Manage Local Repository

Local repository management tasks:

./infra.yml -t repo              # Create repo from internet or offline packages
./infra.yml -t repo_upstream     # Add upstream repositories
./infra.yml -t repo_pkg          # Download packages and dependencies
./infra.yml -t repo_create       # Create local yum/apt repository

Complete subtask list:

./infra.yml -t repo_dir          # Create local repository directory
./infra.yml -t repo_check        # Check if local repo exists
./infra.yml -t repo_prepare      # Use existing repo directly
./infra.yml -t repo_build        # Build repo from upstream
./infra.yml -t repo_upstream     # Add upstream repositories
./infra.yml -t repo_remove       # Delete existing repo files
./infra.yml -t repo_add          # Add repo to system directory
./infra.yml -t repo_url_pkg      # Download packages from internet
./infra.yml -t repo_cache        # Create metadata cache
./infra.yml -t repo_boot_pkg     # Install bootstrap packages
./infra.yml -t repo_pkg          # Download packages and dependencies
./infra.yml -t repo_create       # Create local repository
./infra.yml -t repo_use          # Add new repo to system
./infra.yml -t repo_nginx        # Start Nginx file server

Manage Nginx

Nginx management tasks:

./infra.yml -t nginx                       # Reset Nginx component
./infra.yml -t nginx_index                 # Re-render homepage
./infra.yml -t nginx_config,nginx_reload   # Re-render config and reload

Request HTTPS certificate:

./infra.yml -t nginx_certbot,nginx_reload -e certbot_sign=true

Manage Infrastructure Components

Management commands for various infrastructure components:

./infra.yml -t infra           # Configure infrastructure
./infra.yml -t infra_env       # Configure environment variables
./infra.yml -t infra_pkg       # Install packages
./infra.yml -t infra_user      # Set up OS user
./infra.yml -t infra_cert      # Issue certificates
./infra.yml -t dns             # Configure DNSMasq
./infra.yml -t nginx           # Configure Nginx
./infra.yml -t victoria        # Configure VictoriaMetrics/Logs/Traces
./infra.yml -t alertmanager    # Configure AlertManager
./infra.yml -t blackbox        # Configure Blackbox Exporter
./infra.yml -t grafana         # Configure Grafana
./infra.yml -t infra_register  # Register to VictoriaMetrics/Grafana

Common maintenance commands:

./infra.yml -t nginx_index                        # Re-render homepage
./infra.yml -t nginx_config,nginx_reload          # Reconfigure and reload
./infra.yml -t vmetrics_config,vmetrics_launch    # Regenerate VictoriaMetrics config and restart
./infra.yml -t vlogs_config,vlogs_launch          # Update VictoriaLogs config
./infra.yml -t grafana_plugin                     # Download Grafana plugins

7 - CA and Certificates

Using self-signed CA or real HTTPS certificates

Pigsty uses a self-signed Certificate Authority (CA) by default for internal SSL/TLS encryption. This document covers the built-in self-signed CA, using an external CA, and issuing additional certificates.


Self-Signed CA

Pigsty automatically creates a self-signed CA during infrastructure initialization (infra.yml). The CA signs certificates for:

  • PostgreSQL server/client SSL
  • Patroni REST API
  • etcd cluster communication
  • MinIO cluster communication
  • Nginx HTTPS (fallback)
  • Infrastructure services

PKI Directory Structure

files/pki/
├── ca/
│   ├── ca.key                # CA private key (keep secure!)
│   └── ca.crt                # CA certificate
├── csr/                      # Certificate signing requests
│   ├── misc/                     # Miscellaneous certificates (cert.yml output)
│   ├── etcd/                     # ETCD certificates
│   ├── pgsql/                    # PostgreSQL certificates
│   ├── minio/                    # MinIO certificates
│   ├── nginx/                    # Nginx certificates
│   └── mongo/                    # FerretDB certificates
└── infra/                    # Infrastructure certificates

CA Variables

| Variable | Default | Description |
|----------|---------|-------------|
| ca_create | true | Create CA if not exists, or abort |
| ca_cn | pigsty-ca | CA certificate common name |
| cert_validity | 7300d | Default validity for issued certificates |

Certificate validity varies by type:

| Certificate | Validity | Controlled By |
|-------------|----------|---------------|
| CA Certificate | 100 years | Hardcoded (36500 days) |
| Server/Client | 20 years | cert_validity (7300d) |
| Nginx HTTPS | ~1 year | nginx_cert_validity (397d) |

> Note: Browser vendors cap trusted certificate lifetimes at 398 days, so Nginx uses a shorter validity for browser compatibility.
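
To check the actual validity window of a certificate, inspect it with openssl:

openssl x509 -in files/pki/ca/ca.crt -noout -subject -dates   # CA certificate validity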

Using External CA

To use your own enterprise CA instead of the auto-generated one:

1. Set ca_create: false in your configuration.

2. Place your CA files before running the playbook:

mkdir -p files/pki/ca
cp /path/to/your/ca.key files/pki/ca/ca.key
cp /path/to/your/ca.crt files/pki/ca/ca.crt
chmod 600 files/pki/ca/ca.key
chmod 644 files/pki/ca/ca.crt

3. Run ./infra.yml


Backup CA Files

The CA private key is critical. Back it up securely:

# Backup with timestamp
tar -czvf pigsty-ca-$(date +%Y%m%d).tar.gz files/pki/ca/

Warning: If you lose the CA private key, all certificates signed by it become unverifiable, and you'll need to regenerate everything.


Issue Certificates

Use cert.yml to issue additional certificates signed by Pigsty CA.

Basic Usage

# Issue certificate for database user (client cert)
./cert.yml -e cn=dbuser_dba

# Issue certificate for monitor user
./cert.yml -e cn=dbuser_monitor

Certificates are generated in files/pki/misc/<cn>.{key,crt} by default.

Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| cn | pigsty | Common Name (required) |
| san | [DNS:localhost, IP:127.0.0.1] | Subject Alternative Names |
| org | pigsty | Organization name |
| unit | pigsty | Organizational unit name |
| expire | 7300d | Certificate validity (20 years) |
| key | files/pki/misc/<cn>.key | Private key output path |
| crt | files/pki/misc/<cn>.crt | Certificate output path |

Advanced Examples

# Issue certificate with custom SAN (DNS and IP)
./cert.yml -e cn=myservice -e san=DNS:myservice,IP:10.2.82.163
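
After issuing, you can verify the certificate against the Pigsty CA and inspect its SAN with openssl (using the cn from the example above):

openssl verify -CAfile files/pki/ca/ca.crt files/pki/misc/myservice.crt
openssl x509 -in files/pki/misc/myservice.crt -noout -text | grep -A1 'Subject Alternative Name'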


8 - Grafana High Availability: Using PostgreSQL Backend

Use PostgreSQL instead of SQLite as Grafana's backend database for better performance and availability.

You can use PostgreSQL as Grafana’s backend database.

This is a great opportunity to understand Pigsty's deployment system. By completing this tutorial, you'll learn how to define and create database clusters, users, and databases, and how to switch an application (Grafana) onto a Postgres backend.


TL;DR

vi pigsty.yml # Uncomment DB/User definitions: dbuser_grafana  grafana
bin/pgsql-user  pg-meta  dbuser_grafana
bin/pgsql-db    pg-meta  grafana

psql postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana -c \
  'CREATE TABLE t(); DROP TABLE t;' # Verify connection string works

vi /etc/grafana/grafana.ini # Modify [database] type url
systemctl restart grafana-server

Create Database Cluster

We can define a new database grafana on pg-meta, or create a dedicated Grafana database cluster pg-grafana on new nodes.

Define Cluster

To create a new dedicated cluster pg-grafana on machines 10.10.10.11 and 10.10.10.12, use this config:

pg-grafana:
  hosts:
    10.10.10.11: {pg_seq: 1, pg_role: primary}
    10.10.10.12: {pg_seq: 2, pg_role: replica}
  vars:
    pg_cluster: pg-grafana
    pg_databases:
      - name: grafana
        owner: dbuser_grafana
        revokeconn: true
        comment: grafana primary database
    pg_users:
      - name: dbuser_grafana
        password: DBUser.Grafana
        pgbouncer: true
        roles: [dbrole_admin]
        comment: admin user for grafana database

Create Cluster

Use this command to create the pg-grafana cluster:

./pgsql.yml -l pg-grafana    # Initialize pg-grafana cluster

This command runs the Ansible playbook pgsql.yml, which is used for creating database clusters.

Users and databases defined in pg_users and pg_databases are automatically created during cluster initialization. With this config, after cluster creation (without DNS), you can access the database using these connection strings (any one works):

postgres://dbuser_grafana:[email protected]:5432/grafana # Direct primary connection
postgres://dbuser_grafana:[email protected]:5436/grafana # Direct default service
postgres://dbuser_grafana:[email protected]:5433/grafana # Primary read-write service

Since Pigsty is installed on a single meta node by default, the following steps will create Grafana’s user and database on the existing pg-meta cluster, not the pg-grafana cluster created here.


Create Grafana Business User

The usual convention for managing business objects is to create the user first, then the database: if a database has an owner configured, it depends on that user.

Define User

To create user dbuser_grafana on the pg-meta cluster, first add this user definition to pg-meta’s cluster definition:

Location: all.children.pg-meta.vars.pg_users

- name: dbuser_grafana
  password: DBUser.Grafana
  comment: admin user for grafana database
  pgbouncer: true
  roles: [ dbrole_admin ]

If you define a different password here, replace the corresponding parameter in subsequent steps.

Create User

Use this command to create the dbuser_grafana user (the wrapper script and the playbook below are equivalent):

bin/pgsql-user pg-meta dbuser_grafana # Create `dbuser_grafana` user on pg-meta cluster

This actually calls the Ansible Playbook pgsql-user.yml to create the user:

./pgsql-user.yml -l pg-meta -e pg_user=dbuser_grafana  # Ansible

The dbrole_admin role has permission to execute DDL changes in the database, which is exactly what Grafana needs.


Create Grafana Business Database

Define Database

Creating a business database follows the same pattern as users. First add the new database grafana definition to pg-meta’s cluster definition.

Location: all.children.pg-meta.vars.pg_databases

- { name: grafana, owner: dbuser_grafana, revokeconn: true }

Create Database

Use this command to create the grafana database (the wrapper script and the playbook below are equivalent):

bin/pgsql-db pg-meta grafana # Create `grafana` database on `pg-meta` cluster

This actually calls the Ansible Playbook pgsql-db.yml to create the database:

./pgsql-db.yml -l pg-meta -e pg_database=grafana # Actual Ansible playbook executed

Use Grafana Business Database

Verify Connection String Reachability

You can access the database using different services or access methods, for example:

postgres://dbuser_grafana:DBUser.Grafana@meta:5432/grafana # Direct connection
postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana # Default service
postgres://dbuser_grafana:DBUser.Grafana@meta:5433/grafana # Primary service

Here, we'll use the Default service, which accesses the primary directly through the load balancer.

First verify the connection string is reachable and has DDL execution permissions:

psql postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana -c \
  'CREATE TABLE t(); DROP TABLE t;'

Directly Modify Grafana Config

To make Grafana use Postgres as its backend database, edit /etc/grafana/grafana.ini and modify the [database] section:

[database]
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
;url =

Change the default config to:

[database]
type = postgres
url =  postgres://dbuser_grafana:DBUser.Grafana@meta/grafana

Then restart Grafana:

systemctl restart grafana-server

When you see activity in the newly added grafana database from the monitoring system, Grafana is now using Postgres as its primary backend database. But there's a new issue: the original dashboards and datasources in Grafana have disappeared! You need to re-import the dashboards and Postgres datasources.


Manage Grafana Dashboards

As admin user, navigate to the files/grafana directory under the Pigsty directory and run grafana.py init to reload Pigsty dashboards.

cd ~/pigsty/files/grafana
./grafana.py init    # Initialize Grafana dashboards using Dashboards in current directory

Execution result:

vagrant@meta:~/pigsty/files/grafana
$ ./grafana.py init
Grafana API: admin:pigsty @ http://10.10.10.10:3000
init dashboard : home.json
init folder pgcat
init dashboard: pgcat / pgcat-table.json
init dashboard: pgcat / pgcat-bloat.json
init dashboard: pgcat / pgcat-query.json
init folder pgsql
init dashboard: pgsql / pgsql-replication.json
...

This script detects the current environment (defined in ~/pigsty during installation), gets Grafana access info, and replaces dashboard URL placeholder domains (*.pigsty) with actual domains used.

export GRAFANA_ENDPOINT=http://10.10.10.10:3000
export GRAFANA_USERNAME=admin
export GRAFANA_PASSWORD=pigsty

export NGINX_UPSTREAM_YUMREPO=yum.pigsty
export NGINX_UPSTREAM_CONSUL=c.pigsty
export NGINX_UPSTREAM_PROMETHEUS=p.pigsty
export NGINX_UPSTREAM_ALERTMANAGER=a.pigsty
export NGINX_UPSTREAM_GRAFANA=g.pigsty
export NGINX_UPSTREAM_HAPROXY=h.pigsty

As a side note, use grafana.py clean to clear target dashboards, and grafana.py load to load all dashboards from the current directory. When Pigsty dashboards change, use these two commands to upgrade all dashboards.

Manage Postgres Datasources

When creating a new PostgreSQL cluster with pgsql.yml or a new business database with pgsql-db.yml, Pigsty registers new PostgreSQL datasources in Grafana. You can directly access target database instances through Grafana using the default monitoring user. Most pgcat application features depend on this.

To register Postgres databases, use the register_grafana task in pgsql.yml:

./pgsql.yml -t register_grafana             # Re-register all Postgres datasources in current environment
./pgsql.yml -t register_grafana -l pg-test  # Re-register all databases in pg-test cluster

One-Step Grafana Upgrade

You can directly modify the Pigsty config file to change Grafana’s backend datasource, completing the database switch in one step. Edit the grafana_pgurl parameter in pigsty.yml:

grafana_pgurl: postgres://dbuser_grafana:DBUser.Grafana@meta:5436/grafana

Then re-run the grafana task from infra.yml to complete the Grafana upgrade:

./infra.yml -t grafana