pg_exporter is built around a few simple production-oriented principles:
Local-first connectivity: fall back to postgresql:///?sslmode=disable when no explicit URL is provided, which fits same-host deployments
Declarative collection: metric behavior is driven by YAML collector definitions with precise control over ttl, timeout, tags, and fatal
Dynamic planning: choose collector branches at runtime based on server version, role, extensions, and tags
Keep serving under failure: use non-blocking startup by default so HTTP endpoints still come up while the database is temporarily unavailable
Hot reload: support POST / GET /reload and SIGHUP reloads, with extra SIGUSR1 support on non-Windows platforms
Split probes from traffic: health endpoints use cached background probes instead of blocking the database on every request
Tighten the management surface: /reload, /explain, and /stat expose runtime and config details, so production deployments should protect them with --web.config.file or keep them internal
Installation
PG Exporter provides multiple installation methods to fit your infrastructure:
docker run -d --name pg_exporter -p 9630:9630 -e PG_EXPORTER_URL="postgres://user:pass@host:5432/postgres" pgsty/pg_exporter:latest
# RPM-based systems
sudo tee /etc/yum.repos.d/pigsty-infra.repo > /dev/null <<-'EOF'
[pigsty-infra]
name=Pigsty Infra for $basearch
baseurl=https://repo.pigsty.io/yum/infra/$basearch
enabled = 1
gpgcheck = 0
module_hotfixes=1
EOF
sudo yum makecache
sudo yum install -y pg_exporter
# Debian-based systems
sudo tee /etc/apt/sources.list.d/pigsty-infra.list > /dev/null <<EOF
deb [trusted=yes] https://repo.pigsty.io/apt/infra generic main
EOF
sudo apt update
sudo apt install -y pg-exporter
# Build from source
git clone https://github.com/pgsty/pg_exporter.git
cd pg_exporter
make build
Quick Start
Get PG Exporter up and running in minutes:
# Minimal startup with the local-first default URL
pg_exporter

# Or point to a specific target
PG_EXPORTER_URL='postgres://user:pass@localhost:5432/postgres' pg_exporter

# Access metrics
curl http://localhost:9630/metrics

# Reload configuration online (POST recommended)
curl -X POST http://localhost:9630/reload
The default config supports PostgreSQL 10-18+; PostgreSQL 9.1-9.6 requires the legacy/ config bundle
pgBouncer 1.8-1.25+ is supported
Design Rationale
pg_exporter follows three core runtime principles:
Local-first connectivity: if you do not pass --url or PG_EXPORTER_URL, it falls back to postgresql:///?sslmode=disable
Declarative collection: all business metrics come from YAML collectors, and runtime planning picks branches by version, role, extension, and tags
Keep serving under failure: non-blocking startup is the default, so HTTP endpoints still come up while the target database is temporarily unavailable
Prerequisites
Before you begin, ensure you have:
A PostgreSQL 10+ or pgBouncer 1.8+ instance to monitor
A user account with appropriate permissions for monitoring
A Prometheus-compatible system (for metrics scraping)
Basic understanding of PostgreSQL connection strings
Quick Start
The fastest way to get started with PG Exporter:
# Example: install the Linux amd64 release tarball
wget https://github.com/pgsty/pg_exporter/releases/download/v1.2.2/pg_exporter-1.2.2.linux-amd64.tar.gz
tar -xf pg_exporter-1.2.2.linux-amd64.tar.gz
sudo install pg_exporter-1.2.2.linux-amd64/pg_exporter /usr/bin/
sudo install pg_exporter-1.2.2.linux-amd64/pg_exporter.yml /etc/pg_exporter.yml
# Run with the local-first default URL
pg_exporter

# Or point to a PostgreSQL / pgBouncer target explicitly
PG_EXPORTER_URL='postgres://user:pass@localhost:5432/postgres' pg_exporter

# Verify metrics are available
curl http://localhost:9630/metrics
Understanding the Basics
Connection String
PG Exporter uses standard PostgreSQL connection URLs:
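A few common URL shapes, using standard libpq connection syntax (the credentials and hosts below are placeholders, not defaults):

```shell
# Local connection via unix socket, no password (the local-first default)
export PG_EXPORTER_URL='postgresql:///?sslmode=disable'

# TCP connection with user, password, host, port, and database
export PG_EXPORTER_URL='postgres://monitor:secret@127.0.0.1:5432/postgres'

# pgBouncer admin console (note the pgbouncer virtual database)
export PG_EXPORTER_URL='postgres://pgbouncer:secret@127.0.0.1:6432/pgbouncer'

echo "$PG_EXPORTER_URL"
```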
Built-in Metrics
PG Exporter provides four core built-in metrics out of the box:
| Metric | Type | Description |
|--------|------|-------------|
| pg_up | Gauge | 1 if exporter can connect to PostgreSQL, 0 otherwise |
| pg_version | Gauge | PostgreSQL server version number |
| pg_in_recovery | Gauge | 1 if server is in recovery mode (replica), 0 if primary |
| pg_exporter_build_info | Gauge | Exporter version and build information |
The exporter also exposes pg_exporter_* self-monitoring metrics by default. You can disable them with --disable-intro.
Configuration File
All other metrics (600+) are defined in the pg_exporter.yml configuration file. By default, PG Exporter looks for this file in:
1. Path specified by the --config flag
2. Path in the PG_EXPORTER_CONFIG environment variable
3. Current directory (./pg_exporter.yml)
4. System config (/etc/pg_exporter.yml or /etc/pg_exporter/)
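The lookup order above can be sketched as a small shell function (illustrative only — this is not the exporter's actual code, just the documented precedence over the default paths):

```shell
#!/bin/sh
# Resolve the config path the way the docs describe:
# --config flag > PG_EXPORTER_CONFIG > ./pg_exporter.yml > /etc/pg_exporter.yml
resolve_config() {
  flag="$1"; env_cfg="$2"
  if [ -n "$flag" ]; then echo "$flag"; return; fi
  if [ -n "$env_cfg" ]; then echo "$env_cfg"; return; fi
  if [ -f ./pg_exporter.yml ]; then echo ./pg_exporter.yml; return; fi
  echo /etc/pg_exporter.yml
}

resolve_config "" "/opt/pg_exporter/conf.yml"   # env var wins when no flag is given
```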
Your First Monitoring Setup
Step 1: Create a Monitoring User
Create a dedicated PostgreSQL user for monitoring:
-- Create monitoring user
CREATE USER monitor WITH PASSWORD 'secure_password';
-- Grant necessary permissions
GRANT pg_monitor TO monitor;
GRANT CONNECT ON DATABASE postgres TO monitor;
-- For PostgreSQL 10+, the built-in pg_monitor role provides read access to monitoring views
-- For older versions, you may need additional grants
Step 2: Test Connection
Verify the exporter can connect to your database:
# Set connection URL
export PG_EXPORTER_URL='postgres://monitor:secure_password@localhost:5432/postgres'

# Run in dry-run mode to test configuration
pg_exporter --dry-run
Step 3: Run the Exporter
Start PG Exporter:
# Run with default settings
pg_exporter

# Or with custom flags
pg_exporter \
  --url='postgres://monitor:secure_password@localhost:5432/postgres' \
  --web.listen-address=':9630' \
  --log.level=info
Step 4: Configure Prometheus
Add PG Exporter as a target in your prometheus.yml:
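A minimal scrape configuration sketch (the job name, interval, and target are assumptions — adjust them to your environment):

```yaml
scrape_configs:
  - job_name: 'postgresql'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9630']
```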
Step 5: Verify the Setup
# View raw metrics
curl http://localhost:9630/metrics | grep pg_

# Check exporter statistics
curl http://localhost:9630/stat

# Review current query planning
curl http://localhost:9630/explain
Note
/reload, /stat, and /explain are management endpoints. In production, protect them with --web.config.file or expose them only on trusted internal networks.
Auto-Discovery Mode
PG Exporter can automatically discover and monitor all databases in a PostgreSQL instance:
# Enable auto-discovery (default behavior)
pg_exporter --auto-discovery

# Exclude specific databases
pg_exporter --auto-discovery \
  --exclude-database="template0,template1,postgres"

# Include only specific databases
pg_exporter --auto-discovery \
  --include-database="app_db,analytics_db"
When auto-discovery is enabled:
Cluster-level metrics (1xx-5xx) are collected once per instance
Database-level metrics (6xx-8xx) are collected for each discovered database
Metrics are labeled with datname to distinguish between databases
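For example, a database-level collector with the pg_db namespace would emit one series per discovered database (illustrative values):

```
pg_db_numbackends{datname="app_db"} 12
pg_db_numbackends{datname="analytics_db"} 3
```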
Monitoring pgBouncer
To monitor pgBouncer instead of PostgreSQL:
# Connect to pgBouncer admin database
PG_EXPORTER_URL='postgres://pgbouncer:password@localhost:6432/pgbouncer' \
  pg_exporter --config=/etc/pg_exporter.yml
Health Check Endpoints
PG Exporter provides health check endpoints for load balancers and orchestrators:
# Basic health check
curl http://localhost:9630/up
# Returns: 200 if connected, 503 if not

# Primary detection
curl http://localhost:9630/primary
# Returns: 200 if primary, 404 if replica, 503 if unknown

# Replica detection
curl http://localhost:9630/replica
# Returns: 200 if replica, 404 if primary, 503 if unknown
Troubleshooting
Connection Issues
# Test with detailed logging
pg_exporter --log.level=debug --dry-run

# Check server planning
pg_exporter --explain
Permission Errors
Ensure the monitoring user has necessary permissions:
-- Check current permissions
SELECT * FROM pg_roles WHERE rolname = 'monitor';
-- Grant additional permissions if needed
GRANT USAGE ON SCHEMA pg_catalog TO monitor;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO monitor;
PG Exporter provides multiple installation methods to suit different deployment scenarios.
This guide covers all available installation options with detailed instructions for each platform.
Pigsty
The easiest way to get started with pg_exporter is to use Pigsty,
which is a complete PostgreSQL distribution with built-in Observability best practices based on pg_exporter, Prometheus, and Grafana.
You don't even need to know the details of pg_exporter; it gives you all the metrics and dashboard panels out of the box.
You can install it directly with your OS package manager (rpm/dpkg), or just place the binary in your $PATH. Current tarballs also include pg_exporter.yml, package/pg_exporter.default, package/pg_exporter.service, and LICENSE for manual deployments.
Repository
The package is also available in the pigsty-infra repo.
You can add the repo to your system and install it with your OS package manager:
YUM
For EL distributions such as RHEL, Rocky Linux, CentOS, AlmaLinux, Oracle Linux, and compatibles:
sudo tee /etc/yum.repos.d/pigsty-infra.repo > /dev/null <<-'EOF'
[pigsty-infra]
name=Pigsty Infra for $basearch
baseurl=https://repo.pigsty.io/yum/infra/$basearch
enabled = 1
gpgcheck = 0
module_hotfixes=1
EOF
sudo yum makecache
sudo yum install -y pg_exporter
APT
For Debian, Ubuntu and compatible Linux Distributions:
sudo tee /etc/apt/sources.list.d/pigsty-infra.list > /dev/null <<EOF
deb [trusted=yes] https://repo.pigsty.io/apt/infra generic main
EOF
sudo apt update
sudo apt install -y pg-exporter
Docker
We have prebuilt docker images for amd64 and arm64 architectures on docker hub: pgsty/pg_exporter.
The current Docker image is built from scratch. If you connect to remote PostgreSQL with sslmode=verify-ca or verify-full, mount an explicit CA certificate (sslrootcert or a system CA bundle), otherwise TLS verification may fail.
Binary
pg_exporter can be installed as a standalone binary. Download the tarball for your platform from the release page, extract it, and place the binary in your $PATH.
Compatibility
The default configuration supports PostgreSQL 10 and above. For EOL PostgreSQL versions, use the bundled legacy/ config package for compatible monitoring.
| PostgreSQL Version | Support Status |
|--------------------|----------------|
| 10 ~ 18+ | ✅ Full Support (default config) |
| 9.1 ~ 9.6 | ⚠️ Use legacy/pg_exporter.yml |
| 9.0 and earlier | ❌ Unsupported |
Legacy config example:
make conf9
PG_EXPORTER_CONFIG=legacy/pg_exporter.yml pg_exporter
pg_exporter works with pgBouncer 1.8+, since v1.8 is the first version with SHOW command support.
| pgBouncer Version | Support Status |
|-------------------|----------------|
| 1.8.x ~ 1.25+ | ✅ Full Support |
| before 1.8.x | ⚠️ No Metrics |
3 - Configuration
PG Exporter uses a powerful and flexible configuration system that allows you to define custom metrics, control collection behavior, and optimize performance.
This guide covers all aspects of configuration from basic setup to advanced customization.
Metrics Collectors
PG Exporter uses a declarative YAML configuration system that provides incredible flexibility and control over metric collection. This guide covers all aspects of configuring PG Exporter for your specific monitoring needs.
Configuration Overview
PG Exporter’s configuration is centered around collectors - individual metric queries with associated metadata. The configuration can be:
A single monolithic YAML file (pg_exporter.yml)
A directory containing multiple YAML files (merged alphabetically)
Custom path specified via command-line or environment variable
Configuration Loading
PG Exporter resolves the configuration path in the order described earlier. When the path points to a directory:
Only .yml / .yaml files in that directory are loaded, non-recursively
Files are merged in lexicographic order; later files override earlier collector definitions with the same top-level name
If a config directory contains YAML files but every one of them fails to parse, the exporter returns an error instead of silently ignoring the directory
Collector Structure
Each collector is a top-level object in the YAML configuration with a unique name and various properties:
collector_branch_name:             # Unique identifier for this collector
  name: metric_namespace           # Metric prefix (defaults to branch name)
  desc: "Collector description"    # Human-readable description
  query: |                         # SQL query to execute
    SELECT column1, column2 FROM table

  # Execution Control
  ttl: 10                          # Cache time-to-live in seconds
  timeout: 0.1                     # Query timeout in seconds
  fatal: false                     # If true, failure fails entire scrape
  skip: false                      # If true, collector is disabled

  # Version Compatibility
  min_version: 100000              # Minimum PostgreSQL version (inclusive)
  max_version: 999999              # Maximum PostgreSQL version (exclusive)

  # Execution Tags
  tags: [cluster, primary]         # Conditions for execution

  # Predicate Queries (optional)
  predicate_queries:
    - name: "check_function"
      predicate_query: |
        SELECT EXISTS (...)

  # Metric Definitions
  metrics:
    - column_name:
        usage: GAUGE               # GAUGE, COUNTER, LABEL, or DISCARD
        rename: metric_name        # Optional: rename the metric
        description: "Help text"   # Metric description
        default: 0                 # Default value if NULL
        scale: 1000                # Scale factor for the value
Validation rules as of v1.2.2:
Each entry in metrics must define exactly one column mapping
Each collector must expose at least one GAUGE or COUNTER column
usage only accepts GAUGE, COUNTER, LABEL, or DISCARD
Metric names and label names are validated against Prometheus naming rules during load; invalid configs fail fast
Constant labels are checked for conflicts during load; they cannot overlap with query labels or built-in dynamic labels such as datname and query
If you use one-line inline metrics definitions, keep description values double-quoted to avoid YAML ambiguity
Core Configuration Elements
Collector Branch Name
The top-level key uniquely identifies a collector across the entire configuration:
pg_stat_database:   # Must be unique
  name: pg_db       # Actual metric namespace
Query Definition
The SQL query that retrieves metrics:
query: |
  SELECT
    datname,
    numbackends,
    xact_commit,
    xact_rollback,
    blks_read,
    blks_hit
  FROM pg_stat_database
  WHERE datname NOT IN ('template0', 'template1')
Metric Types
Each column in the query result must be mapped to a metric type:
| Usage | Description | Example |
|-------|-------------|---------|
| GAUGE | Instantaneous value that can go up or down | Current connections |
| COUNTER | Cumulative value that only increases | Total transactions |
| LABEL | Use as a Prometheus label | Database name |
| DISCARD | Ignore this column | Internal values |
Cache Control (TTL)
The ttl parameter controls result caching:
# Fast queries - minimal caching
pg_stat_activity:
  ttl: 1        # Cache for 1 second

# Expensive queries - longer caching
pg_table_bloat:
  ttl: 3600     # Cache for 1 hour
Best practices:
Set TTL less than your scrape interval
Use longer TTL for expensive queries
TTL of 0 disables caching
Timeout Control
Prevent queries from running too long:
timeout: 0.1    # 100ms default
timeout: 1.0    # 1 second for complex queries
timeout: -1     # Disable timeout (not recommended)
Version Compatibility
Control which PostgreSQL versions can run this collector:
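For example, a collector restricted to a version window might declare (the values here are illustrative):

```yaml
min_version: 120000   # runs on PostgreSQL 12.0 and above (inclusive)
max_version: 170000   # but not on PostgreSQL 17.0 or later (exclusive)
```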
Version numbers follow PostgreSQL server_version_num rules (major * 10000 + minor for PostgreSQL 10+):
100000 = 10.0
130002 = 13.2
160001 = 16.1
90600 = 9.6.0, relevant when using the legacy config bundle
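For PostgreSQL 10+ the encoding is simply major * 10000 + minor, which a one-line helper can illustrate (a sketch, not part of pg_exporter):

```shell
# Compute server_version_num for PostgreSQL 10+ releases (major * 10000 + minor).
ver_num() { echo $(( $1 * 10000 + $2 )); }

ver_num 13 2    # -> 130002
ver_num 16 1    # -> 160001
```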
Tag System
Tags control when and where collectors execute:
Built-in Tags
| Tag | Description |
|-----|-------------|
| cluster | Execute once per PostgreSQL cluster |
| primary / master | Only on primary servers |
| standby / replica | Only on replica servers |
| pgbouncer | Only for pgBouncer connections |
Prefixed Tags
| Prefix | Example | Description |
|--------|---------|-------------|
| dbname: | dbname:postgres | Only on the specified database |
| username: | username:monitor | Only with the specified user |
| extension: | extension:pg_stat_statements | Only if the extension is installed |
| schema: | schema:public | Only if the schema exists |
| not: | not:slow | Only when the exporter does NOT have this tag |
Custom Tags
Pass custom tags to the exporter:
pg_exporter --tag="production,critical"
Then use in configuration:
expensive_metrics:
  tags: [critical]   # Only runs with 'critical' tag
Predicate Queries
Execute conditional checks before main query:
predicate_queries:
  - name: "Check pg_stat_statements"
    predicate_query: |
      SELECT EXISTS (
        SELECT 1 FROM pg_extension
        WHERE extname = 'pg_stat_statements'
      )
The main query only executes if all predicates return true.
Metric Definition
Basic Definition
metrics:
  - numbackends:
      usage: GAUGE
      description: "Number of backends connected"
Advanced Options
metrics:
  - checkpoint_write_time:
      usage: COUNTER
      rename: write_time        # Rename metric
      scale: 0.001              # Convert ms to seconds
      default: 0                # Use 0 if NULL
      description: "Checkpoint write time in seconds"
Collector Organization
PG Exporter ships with pre-organized collectors:
| Range | Category | Description |
|-------|----------|-------------|
| 0xx | Documentation | Examples and documentation |
| 1xx | Basic | Server info, settings, metadata |
| 2xx | Replication | Replication, slots, receivers |
| 3xx | Persistence | I/O, checkpoints, WAL |
| 4xx | Activity | Connections, locks, queries |
| 5xx | Progress | Vacuum, index creation progress |
| 6xx | Database | Per-database statistics |
| 7xx | Objects | Tables, indexes, functions |
| 8xx | Optional | Expensive/optional metrics |
| 9xx | pgBouncer | Connection pooler metrics |
| 10xx+ | Extensions | Extension-specific metrics |
Real-World Examples
Simple Gauge Collector
pg_connections:
  desc: "Current database connections"
  query: |
    SELECT
      count(*) as total,
      count(*) FILTER (WHERE state = 'active') as active,
      count(*) FILTER (WHERE state = 'idle') as idle,
      count(*) FILTER (WHERE state = 'idle in transaction') as idle_in_transaction
    FROM pg_stat_activity
    WHERE pid != pg_backend_pid()
  ttl: 1
  metrics:
    - total: {usage: GAUGE, description: "Total connections"}
    - active: {usage: GAUGE, description: "Active connections"}
    - idle: {usage: GAUGE, description: "Idle connections"}
    - idle_in_transaction: {usage: GAUGE, description: "Idle in transaction"}
Extension-Gated Collector
pg_stat_statements_metrics:
  desc: "Query performance statistics"
  tags: [extension:pg_stat_statements]
  query: |
    SELECT
      sum(calls) as total_calls,
      sum(total_exec_time) as total_time,
      sum(mean_exec_time * calls) / sum(calls) as mean_time
    FROM pg_stat_statements
  ttl: 60
  metrics:
    - total_calls: {usage: COUNTER}
    - total_time: {usage: COUNTER, scale: 0.001}
    - mean_time: {usage: GAUGE, scale: 0.001}
Custom Collectors
Creating Your Own Metrics
Create a new YAML file in your config directory:
# /etc/pg_exporter/custom_metrics.yml
app_metrics:
  desc: "Application-specific metrics"
  query: |
    SELECT
      (SELECT count(*) FROM users WHERE active = true) as active_users,
      (SELECT count(*) FROM orders WHERE created_at > NOW() - '1 hour'::interval) as recent_orders,
      (SELECT avg(processing_time) FROM jobs WHERE completed_at > NOW() - '5 minutes'::interval) as avg_job_time
  ttl: 30
  metrics:
    - active_users: {usage: GAUGE, description: "Currently active users"}
    - recent_orders: {usage: GAUGE, description: "Orders in last hour"}
    - avg_job_time: {usage: GAUGE, description: "Average job processing time"}
Test your collector:
pg_exporter --explain --config=/etc/pg_exporter/
Conditional Metrics
Use predicate queries for conditional metrics:
partition_metrics:
  desc: "Partitioned table metrics"
  predicate_queries:
    - name: "Check if partitioning is used"
      predicate_query: |
        SELECT EXISTS (
          SELECT 1 FROM pg_class
          WHERE relkind = 'p' LIMIT 1
        )
  query: |
    SELECT
      parent.relname as parent_table,
      count(*) as partition_count,
      sum(pg_relation_size(child.oid)) as total_size
    FROM pg_inherits
    JOIN pg_class parent ON parent.oid = pg_inherits.inhparent
    JOIN pg_class child ON child.oid = pg_inherits.inhrelid
    WHERE parent.relkind = 'p'
    GROUP BY parent.relname
  ttl: 300
  metrics:
    - parent_table: {usage: LABEL}
    - partition_count: {usage: GAUGE}
    - total_size: {usage: GAUGE}
Performance Optimization
Query Optimization Tips
Use appropriate TTL values:
Fast queries: 1-10 seconds
Medium queries: 10-60 seconds
Expensive queries: 300-3600 seconds
Set realistic timeouts:
Default: 100ms
Complex queries: 500ms-1s
Never disable timeout in production
Use cluster-level tags:
tags: [cluster]   # Run once per cluster, not per database
Disable expensive collectors:
pg_table_bloat:
  skip: true   # Disable if not needed
Monitoring Collector Performance
Check collector execution statistics:
# View collector statistics
curl http://localhost:9630/stat

# Check which collectors are slow
curl http://localhost:9630/metrics | grep pg_exporter_collector_duration
PG Exporter provides a comprehensive REST API for metrics collection, health checks, traffic routing, and operational control. All endpoints are exposed over HTTP/HTTPS on the configured port (default: 9630).
GET /metrics
The primary endpoint that exposes all collected metrics in Prometheus format.
Request
curl http://localhost:9630/metrics
Response
# HELP pg_up last scrape was able to connect to the server: 1 for yes, 0 for no
# TYPE pg_up gauge
pg_up 1
# HELP pg_version server version number
# TYPE pg_version gauge
pg_version 140000
# HELP pg_in_recovery server is in recovery mode? 1 for yes 0 for no
# TYPE pg_in_recovery gauge
pg_in_recovery 0
# HELP pg_exporter_build_info A metric with a constant '1' value labeled with version, revision, branch, goversion, builddate, goos, and goarch from which pg_exporter was built.
# TYPE pg_exporter_build_info gauge
pg_exporter_build_info{version="1.2.2",branch="main",revision="<git-sha>",builddate="<build-date>",goversion="go1.26.2",goos="linux",goarch="amd64"} 1
# ... additional metrics
Response Format
Metrics follow the Prometheus exposition format:
# HELP <metric_name> <description>
# TYPE <metric_name> <type>
<metric_name>{<label_name>="<label_value>",...} <value> <timestamp>
Health Checks
Health endpoints provide multiple ways to monitor PG Exporter and the target database state.
GET /up
Simple aliveness check based on cached background probe state. It does not actively probe the database on every HTTP request.
Response Codes
| Code | Status | Description |
|------|--------|-------------|
| 200 | OK | Target is available (primary / replica) |
| 503 | Service Unavailable | Target is unavailable (down / starting / unknown) |
Example
# Check whether the service is healthy
curl -I http://localhost:9630/up
HTTP/1.1 200 OK
Content-Type: text/plain;charset=utf-8
Traffic Routing
These endpoints are designed for load balancers and proxies to route traffic based on server role.
GET /primary
Check whether the server is a primary instance.
Response Codes
| Code | Status | Description |
|------|--------|-------------|
| 200 | OK | Server is primary and accepting writes |
| 404 | Not Found | Server is not primary and is acting as a replica |
| 503 | Service Unavailable | Server is unavailable (down / starting / unknown) |
Aliases
/leader
/master
/read-write
/rw
Example
# Check whether the server is primary
curl -I http://localhost:9630/primary

# Use in HAProxy
backend pg_primary
    option httpchk GET /primary
    server pg1 10.0.0.1:5432 check port 9630
    server pg2 10.0.0.2:5432 check port 9630
GET /replica
Check whether the server is a replica instance.
Response Codes
| Code | Status | Description |
|------|--------|-------------|
| 200 | OK | Server is a replica and in recovery |
| 404 | Not Found | Server is not a replica and is acting as primary |
| 503 | Service Unavailable | Server is unavailable (down / starting / unknown) |
Aliases
/standby
/read-only
/ro
/slave remains compatible, but /replica is the preferred name.
Example
# Check whether the server is a replica
curl -I http://localhost:9630/replica

# Use in a load balancer
backend pg_replicas
    option httpchk GET /replica
    server pg2 10.0.0.2:5432 check port 9630
    server pg3 10.0.0.3:5432 check port 9630
GET /read
Check whether the server can handle read traffic. Both primaries and replicas may return success.
Response Codes
| Code | Status | Description |
|------|--------|-------------|
| 200 | OK | Server is healthy and can handle reads |
| 503 | Service Unavailable | Server is unavailable (down / starting / unknown) |
Example
# Check whether the server can serve reads
curl -I http://localhost:9630/read

# Route reads to any healthy server
backend pg_read
    option httpchk GET /read
    server pg1 10.0.0.1:5432 check port 9630
    server pg2 10.0.0.2:5432 check port 9630
    server pg3 10.0.0.3:5432 check port 9630
Operational Endpoints
GET /reload / POST /reload
Reload configuration without restarting the exporter.
Request
# POST is recommended
curl -X POST http://localhost:9630/reload

# GET remains supported for compatibility
curl http://localhost:9630/reload
Response
server reloaded
Response Codes
| Code | Status | Description |
|------|--------|-------------|
| 200 | OK | Reload completed successfully |
| 500 | Internal Server Error | Reload failed; body contains fail to reload: ... |
| 405 | Method Not Allowed | Non-GET/POST request, with Allow: GET, POST |
Use Cases
Update collector definitions
Change query parameters
Modify cache TTL values
Add or remove collectors
Note
Reload refreshes collector configuration and query plans. Process-level settings such as listen addresses and CLI arguments still require a restart.
Security Advice
/reload, /explain, and /stat are management endpoints. If the exporter is reachable beyond localhost or a trusted private network, protect them with --web.config.file or restrict access at the reverse proxy or firewall layer.
GET /explain
Display planned collector execution details for all configured collectors.
This endpoint is useful for identifying slow or problematic collectors.
Using with Load Balancers
HAProxy Example
# Primary backend for write traffic
backend pg_primary
mode tcp
option httpchk GET /primary
http-check expect status 200
server pg1 10.0.0.1:5432 check port 9630 inter 3000 fall 2 rise 2
server pg2 10.0.0.2:5432 check port 9630 inter 3000 fall 2 rise 2 backup
# Replica backend for read traffic
backend pg_replicas
mode tcp
balance roundrobin
option httpchk GET /replica
http-check expect status 200
server pg2 10.0.0.2:5432 check port 9630 inter 3000 fall 2 rise 2
server pg3 10.0.0.3:5432 check port 9630 inter 3000 fall 2 rise 2
# Read backend for any server that can handle reads
backend pg_read
mode tcp
balance leastconn
option httpchk GET /read
http-check expect status 200
server pg1 10.0.0.1:5432 check port 9630 inter 3000 fall 2 rise 2
server pg2 10.0.0.2:5432 check port 9630 inter 3000 fall 2 rise 2
server pg3 10.0.0.3:5432 check port 9630 inter 3000 fall 2 rise 2
This guide covers production deployment strategies, best practices, and real-world configurations.
pg_exporter itself can be configured through:
Command-line arguments with higher precedence
Environment variables with lower precedence
Metric collectors are configured through YAML config files or directories:
/etc/pg_exporter.yml by default
/etc/pg_exporter/ for a directory with multiple config files
The configuration file uses YAML and is composed of collector definitions that describe what to collect and how to collect it.
Deployment Design
These are the main production tradeoffs behind pg_exporter:
Local-first connectivity: the default URL is postgresql:///?sslmode=disable, which fits same-host deployments
Observable before connected: non-blocking startup is the default, so HTTP endpoints come up even when the target database is temporarily unavailable
Controllable failure mode: with --fail-fast, startup exits immediately if the target cannot be reached
Online changes: support hot reload through POST / GET /reload and SIGHUP, with extra SIGUSR1 support on non-Windows platforms
Decoupled health probes: /up and related endpoints use cached background probe state, so probe storms do not hammer the database
Shared management surface: /reload, /explain, and /stat are exposed on the same web listener by default, so protect them with --web.config.file or keep them inside trusted networks
Command-Line Flags
All configuration options can be specified through command-line flags:
Run pg_exporter --help for the full list of flags:
Flags:
  -h, --[no-]help              Show context-sensitive help (also try --help-long and --help-man).
  -u, --url=URL                postgres target url
  -c, --config=CONFIG          path to config dir or file
      --web.listen-address=:9630 ...
                               Addresses on which to expose metrics and web interface. Repeatable for
                               multiple addresses. Examples: `:9100` or `[::1]:9100` for http,
                               `vsock://:9100` for vsock
      --web.config.file=""     Path to configuration file that can enable TLS or authentication.
                               See: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md
  -l, --label=""               constant labels: comma separated list of label=value pair ($PG_EXPORTER_LABEL)
  -t, --tag=""                 tags, comma separated list of server tag ($PG_EXPORTER_TAG)
  -C, --[no-]disable-cache     force not using cache ($PG_EXPORTER_DISABLE_CACHE)
  -m, --[no-]disable-intro     disable internal/exporter self metrics, only expose query metrics ($PG_EXPORTER_DISABLE_INTRO)
  -a, --[no-]auto-discovery    automatically scrape all database for given server ($PG_EXPORTER_AUTO_DISCOVERY)
  -x, --exclude-database="template0,template1,postgres"
                               excluded databases when enabling auto-discovery ($PG_EXPORTER_EXCLUDE_DATABASE)
  -i, --include-database=""    included databases when auto-discovery is enabled ($PG_EXPORTER_INCLUDE_DATABASE)
  -n, --namespace=""           prefix of built-in metrics, (pg|pgbouncer) by default ($PG_EXPORTER_NAMESPACE)
  -f, --[no-]fail-fast         fail fast instead of waiting during start-up ($PG_EXPORTER_FAIL_FAST)
  -T, --connect-timeout=100    connect timeout in ms, 100 by default ($PG_EXPORTER_CONNECT_TIMEOUT)
  -P, --web.telemetry-path="/metrics"
                               URL path under which to expose metrics ($PG_EXPORTER_TELEMETRY_PATH)
  -D, --[no-]dry-run           dry run and print raw configs
  -E, --[no-]explain           explain server planned queries
      --log.level="info"       log level: debug|info|warn|error
      --log.format="logfmt"    log format: logfmt|json
      --[no-]version           Show application version.
Environment Variables
All command-line arguments have corresponding environment variables:
Besides PG_EXPORTER_URL, these URL-related variables are also supported:
PGURL as a compatibility environment variable for the connection URL
PG_EXPORTER_URL_FILE to read the connection URL from a file, which is useful with container secrets
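A sketch of the file-based approach (the path and credentials below are placeholders; a real deployment would point at a mounted secret):

```shell
# Read the connection URL from a file, e.g. a mounted container secret.
printf '%s' 'postgres://monitor:secret@127.0.0.1:5432/postgres' > /tmp/pg_url
export PG_EXPORTER_URL_FILE=/tmp/pg_url
cat "$PG_EXPORTER_URL_FILE"
```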
Advice
If the exporter is exposed beyond localhost or a trusted private network, configure --web.config.file to protect both /metrics and the management endpoints with authentication and TLS. Otherwise, anyone who can reach the port can read /explain, inspect /stat, and trigger /reload.
Deployment Architecture
The simplest setup is one exporter per PostgreSQL instance:
Create a dedicated monitoring user with the minimum required privileges:
-- Create monitoring role
CREATE ROLE monitor WITH LOGIN PASSWORD 'strong_password' CONNECTION LIMIT 5;
-- Grant necessary permissions
GRANT pg_monitor TO monitor;                     -- PostgreSQL 10+ built-in role
GRANT CONNECT ON DATABASE postgres TO monitor;
-- For specific databases
GRANT CONNECT ON DATABASE app_db TO monitor;
GRANT USAGE ON SCHEMA public TO monitor;
-- Additional privileges for extended monitoring
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO monitor;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA pg_catalog TO monitor;
[Unit]
Description=Prometheus exporter for PostgreSQL/Pgbouncer server metrics
Documentation=https://pigsty.io/docs/pg_exporter
After=network.target

[Service]
EnvironmentFile=-/etc/default/pg_exporter
User=prometheus
ExecStart=/usr/bin/pg_exporter $PG_EXPORTER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
This /etc/default/pg_exporter example reflects the packaged environment file. You can append extra variables such as PG_EXPORTER_DISABLE_INTRO=false if needed. If PG_EXPORTER_URL is omitted entirely, the binary itself still falls back to the local-first default postgresql:///?sslmode=disable.
If the container connects to PostgreSQL over remote TLS, mount sslrootcert or a system CA bundle explicitly. The current official image is built from scratch, so it does not ship with a generic system trust store.
groups:
  - name: pg_exporter
    rules:
      - alert: PgExporterDown
        expr: up{job="postgresql"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "PG Exporter is down"
          description: "PG Exporter on {{ $labels.instance }} has been down for more than 5 minutes"
      - alert: PostgreSQLDown
        expr: pg_up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "PostgreSQL connection failed"
          description: "Unable to connect to PostgreSQL on {{ $labels.instance }}"
      - alert: PgExporterSlowScrape
        expr: pg_exporter_scrape_duration > 30
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PG Exporter scrape is slow"
          description: "Scrape duration on {{ $labels.instance }} has exceeded 30 seconds"
6 - Release Notes
The latest stable version of pg_exporter is v1.2.2
v1.2.2 is a routine maintenance release that only refreshes the release toolchain to Go 1.26.2. It does not introduce new collectors, config semantics, or runtime behavior changes.
Highlights
Refresh the release toolchain: bump release builds to Go 1.26.2
No functional changes: collector behavior, default configs, metric definitions, and runtime semantics remain unchanged
v1.2.1 is a lightweight maintenance release focused on release engineering, config package consistency, and documentation/metadata refresh. It does not introduce new collector semantics or runtime behavior changes.
Highlights
Refresh the build toolchain: bump both release workflows and Docker build images to Go 1.26.1
Standardize config style: switch inline description values in both current and legacy configs to double-quoted form, and regenerate merged pg_exporter.yml / legacy/pg_exporter.yml
Add config consistency tests: verify split and merged configs remain equivalent, and check inline metric description style to reduce configuration drift
Refresh packaging metadata: update RPM / DEB support descriptions to PostgreSQL 9.x - 18+ and pgBouncer 1.8 - 1.25+, and refresh Pigsty documentation links
v1.2.0 is a stability-and-compatibility focused minor release across startup flow, hot reload, health probing, config validation, and legacy support.
New Features:
Add robust hot reload workflow: support platform-specific reload signals (SIGHUP / SIGUSR1) and strengthen POST /reload to refresh configs and query plans without process restart
Switch startup to non-blocking mode: HTTP endpoints come up first even when target precheck fails, making recovery and monitoring integration smoother
Add PostgreSQL 9.1-9.6 legacy config bundle: provide legacy/ configs and a make conf9 target for easier onboarding of EOL PostgreSQL versions
Rework health probing architecture: use cached health snapshots with periodic probes for more consistent role-based health endpoints and smoother reload behavior
Improve release engineering baseline: run go test and go vet in release workflows and bump build toolchain to Go 1.26.0
Bug Fixes:
Fix multiple config parsing edge cases: reject malformed metrics entries, return explicit errors when config dirs fail to load valid YAML, and harden runtime fallbacks
Fix CLI bool flag parsing to correctly handle --flag=false style arguments
Fix /explain output/rendering behavior by adjusting content type handling and using safer template rendering
Change min_version from 9.6 to 10 and add explicit ::int type casting
pg_size: Fix log directory size detection, use logging_collector check instead of path pattern matching
pg_table: Performance optimization, replace LATERAL subqueries with JOIN for better query performance; fix tuples and frozenxid metric type from COUNTER to GAUGE; increase timeout from 1s to 2s
pg_vacuuming: Add PG17 collector branch with new metrics indexes_total, indexes_processed, dead_tuple_bytes for index vacuum progress tracking
pg_query: Increase timeout from 1s to 2s for high-load scenarios
Remove the monitor schema requirement for pg_query collectors (ensure visibility via search_path, or install pg_stat_statements in the default public schema)
Fix pgbouncer version parsing message level from info to debug