# Manage PostgreSQL Cron Jobs
Pigsty uses crontab to manage scheduled tasks for routine backups, freezing aging transactions, and reorganizing bloated tables and indexes.
## Quick Reference
| Operation | Quick Command | Description |
|---|---|---|
| Configure Cron Jobs | `./pgsql.yml -t pg_crontab -l <cls>` | Apply `pg_crontab` config |
| View Cron Jobs | `crontab -l` | View as the `postgres` user |
| Physical Backup | `pg-backup [full\|diff\|incr]` | Run a backup with pgBackRest |
| Transaction Freeze | `pg-vacuum [database...]` | Freeze aging transactions, prevent XID wraparound |
| Bloat Maintenance | `pg-repack [database...]` | Online reorganization of bloated tables and indexes |
For other management tasks, see: Backup Management, Monitoring System, HA Management.
## Configure Cron Jobs
Use the `pg_crontab` parameter to configure cron jobs for the PostgreSQL database superuser (`pg_dbsu`, default `postgres`).
### Example Configuration

The following configuration sets up a daily full backup at 1:00 AM for the pg-meta cluster, while pg-test takes a full backup on Monday and incremental backups on the other days.
```yaml
pg-meta:
  hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
  vars:
    pg_cluster: pg-meta
    pg_crontab:
      - '00 01 * * * /pg/bin/pg-backup'

pg-test:
  hosts:
    10.10.10.11: { pg_seq: 1, pg_role: primary }
    10.10.10.12: { pg_seq: 2, pg_role: replica }
  vars:
    pg_cluster: pg-test
    pg_crontab:
      - '00 01 * * 1 /pg/bin/pg-backup full'
      - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'
```
### Recommended Maintenance Schedule

```yaml
pg_crontab:
  - '00 01 * * * /pg/bin/pg-backup full'   # Daily full backup at 1:00 AM
  - '00 03 * * 0 /pg/bin/pg-vacuum'        # Weekly vacuum freeze on Sunday at 3:00 AM
  - '00 04 * * 1 /pg/bin/pg-repack'        # Weekly repack on Monday at 4:00 AM
```
| Task | Frequency | Timing | Description |
|---|---|---|---|
| `pg-backup` | Daily | Early morning | Full or incremental backup, depending on business needs |
| `pg-vacuum` | Weekly | Sunday early morning | Freeze aging transactions, prevent XID wraparound |
| `pg-repack` | Weekly/Monthly | Off-peak hours | Reorganize bloated tables/indexes, reclaim space |
The `pg-backup`, `pg-vacuum`, and `pg-repack` scripts automatically detect the current node's role: only the primary actually executes them, while replicas exit immediately. You can therefore safely configure the same cron jobs on all nodes; after a failover, the new primary automatically continues running the maintenance tasks.
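This primary-only guard can be sketched as follows. Note this is an illustrative reconstruction, not the actual script code: the real scripts determine the role by querying PostgreSQL (e.g. `SELECT pg_is_in_recovery()`), while here the query result is stubbed with a variable so the snippet is self-contained.

```bash
# Illustrative sketch of the primary-only guard (not the actual Pigsty script).
# The real scripts query PostgreSQL for the role, roughly:
#   psql -AXtwc 'SELECT pg_is_in_recovery()'   # 't' on a replica, 'f' on the primary
# Here the result is stubbed so the logic is self-contained.
in_recovery="f"

if [ "${in_recovery}" = "t" ]; then
    decision="exit"    # replica: do nothing and exit with code 1
else
    decision="run"     # primary: proceed with the maintenance task
fi
echo "node role decision: ${decision}"
```

Because the guard is the first thing each script does, a replica that receives the same crontab entry wakes up, notices it is in recovery, and exits without touching the database.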
## Apply Cron Jobs
Cron jobs are automatically written to the default crontab location of the corresponding OS distribution when the pgsql.yml playbook executes (the `pg_crontab` task):

- EL (RHEL/Rocky/Alma): `/var/spool/cron/postgres`
- Debian/Ubuntu: `/var/spool/cron/crontabs/postgres`
```bash
./pgsql.yml -l pg-meta     -t pg_crontab   # Apply pg_crontab config to the specified cluster
./pgsql.yml -l 10.10.10.10 -t pg_crontab   # Target a specific host only
```

```bash
# Edit cron jobs as the postgres user
sudo -u postgres crontab -e

# Or edit the crontab file directly
sudo vi /var/spool/cron/postgres          # EL series
sudo vi /var/spool/cron/crontabs/postgres # Debian/Ubuntu
```
Each playbook run fully overwrites the existing cron job configuration, so manual edits are lost on the next apply.
## View Cron Jobs

Run the following command as the `pg_dbsu` OS user to view cron jobs:

```bash
crontab -l
```

```
# Pigsty Managed Crontab for postgres
SHELL=/bin/bash
PATH=/usr/pgsql/bin:/pg/bin:/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin
MAILTO=""
00 01 * * * /pg/bin/pg-backup
```
If you’re not familiar with crontab syntax, refer to Crontab Guru for explanations.
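As a quick refresher, each entry has five schedule fields (minute, hour, day-of-month, month, day-of-week) followed by the command. The snippet below is purely illustrative; it splits the example entry shown above into its parts:

```bash
# Split a crontab entry into its five schedule fields plus the command.
# Field order: minute  hour  day-of-month  month  day-of-week  command
entry='00 01 * * * /pg/bin/pg-backup'
read -r minute hour dom month dow command <<< "$entry"
echo "runs daily at ${hour}:${minute} -> ${command}"
```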
## pg-backup
pg-backup is Pigsty’s physical backup script based on pgBackRest, supporting full, differential, and incremental backup modes.
### Basic Usage

```bash
pg-backup      # Incremental backup (default); runs a full backup automatically if none exists yet
pg-backup full # Full backup
pg-backup diff # Differential backup (based on the most recent full backup)
pg-backup incr # Incremental backup (based on the most recent backup of any type)
```
### Backup Types

| Type | Parameter | Description |
|---|---|---|
| Full | `full` | Complete copy of all data; recovery needs only this backup |
| Differential | `diff` | Changes since the last full backup; recovery needs full + diff |
| Incremental | `incr` | Changes since the last backup of any type; recovery needs the complete chain |
### Execution Requirements

- The script must run on the primary as the `postgres` user
- The script auto-detects the current node role and exits (exit code 1) when run on a replica
- The stanza name is retrieved automatically from `/etc/pgbackrest/pgbackrest.conf`
### Common Cron Configurations

Daily full backup:

```yaml
pg_crontab:
  - '00 01 * * * /pg/bin/pg-backup full'   # Daily full backup at 1:00 AM
```

Weekly full backup with daily incrementals:

```yaml
pg_crontab:
  - '00 01 * * 1 /pg/bin/pg-backup full'        # Monday full backup
  - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup'   # Incremental on other days
```

Weekly full backup with daily differentials:

```yaml
pg_crontab:
  - '00 01 * * 1 /pg/bin/pg-backup full'             # Monday full backup
  - '00 01 * * 2,3,4,5,6,7 /pg/bin/pg-backup diff'   # Differential on other days
```
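The weekly-full schedules above amount to a simple day-of-week dispatch. The helper below is a hypothetical illustration of that rule (not part of Pigsty):

```bash
# Hypothetical helper: pick the backup type for a given ISO day of week
# (1 = Monday ... 7 = Sunday) under the "Monday full, otherwise incremental" schedule.
backup_type_for_dow() {
    if [ "$1" -eq 1 ]; then
        echo "full"    # Monday: full backup
    else
        echo "incr"    # other days: incremental backup
    fi
}

echo "today: pg-backup $(backup_type_for_dow "$(date +%u)")"
```

In practice the dispatch is done by cron itself via the day-of-week field, so the scripts stay schedule-agnostic.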
For more backup and recovery operations, see the Backup Management section.
## pg-vacuum
pg-vacuum is Pigsty’s transaction freeze script for executing VACUUM FREEZE operations to prevent database shutdown from transaction ID (XID) wraparound.
### Basic Usage

```bash
pg-vacuum                  # Freeze aging tables in all databases
pg-vacuum mydb             # Process the specified database only
pg-vacuum mydb1 mydb2      # Process multiple databases
pg-vacuum -n mydb          # Dry-run mode: display only, without executing
pg-vacuum -a 80000000 mydb # Custom age threshold (default 100M)
pg-vacuum -r 50 mydb       # Custom aging-ratio threshold (default 40%)
```

The equivalent manual operations in SQL:

```sql
-- Run VACUUM FREEZE on the entire database
VACUUM FREEZE;

-- Run VACUUM FREEZE on a specific table
VACUUM FREEZE schema.table_name;
```
### Command Options

| Option | Description | Default |
|---|---|---|
| `-h, --help` | Show help message | - |
| `-n, --dry-run` | Dry-run mode, display only | `false` |
| `-a, --age` | Age threshold; tables exceeding it are frozen | `100000000` |
| `-r, --ratio` | Aging-ratio threshold; full-database freeze if exceeded (%) | `40` |
### Logic

- Check the database's `datfrozenxid` age and skip the database if it is below the threshold
- Calculate the aging page ratio (the percentage of total pages belonging to tables that exceed the age threshold)
- If the aging ratio > 40%, run `VACUUM FREEZE ANALYZE` on the whole database
- Otherwise, run `VACUUM FREEZE ANALYZE` only on the tables exceeding the age threshold

The script sets `vacuum_cost_limit = 10000` and `vacuum_cost_delay = 1ms` to limit I/O impact.
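Assuming the decision rule described above, the per-database selection can be sketched as a small function. This is an illustration of the documented behavior, not the actual pg-vacuum code:

```bash
# Sketch of pg-vacuum's documented decision rule (illustrative, not the real script).
# Inputs: database XID age, aging-page ratio (%), and the two thresholds.
vacuum_scope() {
    local age=$1 ratio=$2 age_threshold=${3:-100000000} ratio_threshold=${4:-40}
    if [ "$age" -lt "$age_threshold" ]; then
        echo "skip"        # database is young enough, nothing to do
    elif [ "$ratio" -gt "$ratio_threshold" ]; then
        echo "database"    # high aging ratio: VACUUM FREEZE ANALYZE the whole database
    else
        echo "tables"      # otherwise: freeze only the tables over the age threshold
    fi
}

vacuum_scope 120000000 55   # prints "database"
```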
### Execution Requirements

- The script must run on the primary as the `postgres` user
- A file lock `/tmp/pg-vacuum.lock` prevents concurrent execution
- The system databases `template0`, `template1`, and `postgres` are skipped automatically
### Common Cron Configuration

```yaml
pg_crontab:
  - '00 03 * * 0 /pg/bin/pg-vacuum'   # Weekly on Sunday at 3:00 AM
```
## pg-repack
pg-repack is Pigsty’s bloat maintenance script based on the pg_repack extension for online reorganization of bloated tables and indexes.
### Basic Usage

```bash
pg-repack                 # Reorganize bloated tables and indexes in all databases
pg-repack mydb            # Reorganize the specified database only
pg-repack mydb1 mydb2     # Reorganize multiple databases
pg-repack -n mydb         # Dry-run mode: display only, without executing
pg-repack -t mydb         # Reorganize tables only
pg-repack -i mydb         # Reorganize indexes only
pg-repack -T 30 -j 4 mydb # Custom lock timeout (seconds) and parallelism
```

```bash
# Use the pg_repack command directly to reorganize a specific table
pg_repack dbname -t schema.table

# Use the pg_repack command directly to reorganize a specific index
pg_repack dbname -i schema.index
```
### Command Options

| Option | Description | Default |
|---|---|---|
| `-h, --help` | Show help message | - |
| `-n, --dry-run` | Dry-run mode, display only | `false` |
| `-t, --table` | Reorganize tables only | `false` |
| `-i, --index` | Reorganize indexes only | `false` |
| `-T, --timeout` | Lock wait timeout (seconds) | `10` |
| `-j, --jobs` | Parallel jobs | `2` |
### Auto-Selection Thresholds

The script auto-selects objects to reorganize based on table/index size and bloat ratio:
#### Table Bloat Thresholds
| Size Range | Bloat Threshold | Max Count |
|---|---|---|
| < 256MB | > 40% | 64 |
| 256MB - 2GB | > 30% | 16 |
| 2GB - 8GB | > 20% | 4 |
| 8GB - 64GB | > 15% | 1 |
#### Index Bloat Thresholds
| Size Range | Bloat Threshold | Max Count |
|---|---|---|
| < 128MB | > 40% | 64 |
| 128MB - 1GB | > 35% | 16 |
| 1GB - 8GB | > 30% | 4 |
| 8GB - 64GB | > 20% | 1 |
Tables/indexes over 64GB are skipped with a warning and require manual handling.
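The table-selection rule above can be read as a size-to-threshold lookup. The function below is an illustrative encoding of the table thresholds, not the actual script:

```bash
# Illustrative encoding of the table-bloat thresholds above (not the real script).
# Input: table size in MB; output: required bloat percentage, or "skip" for > 64GB.
table_bloat_threshold() {
    local mb=$1
    if   [ "$mb" -lt 256 ];   then echo 40     # < 256MB
    elif [ "$mb" -lt 2048 ];  then echo 30     # 256MB - 2GB
    elif [ "$mb" -lt 8192 ];  then echo 20     # 2GB - 8GB
    elif [ "$mb" -lt 65536 ]; then echo 15     # 8GB - 64GB
    else                           echo skip   # > 64GB: warn and handle manually
    fi
}

table_bloat_threshold 1024   # prints 30
```

The sliding scale reflects the cost of repacking: small tables are cheap to rebuild even at moderate bloat, while very large tables are only touched when bloat is severe, and only a few per run.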
### Execution Requirements

- The script must run on the primary as the `postgres` user
- Requires the `pg_repack` extension (installed by default in Pigsty)
- Requires the `pg_table_bloat` and `pg_index_bloat` views in the `monitor` schema
- A file lock `/tmp/pg-repack.lock` prevents concurrent execution
- The system databases `template0`, `template1`, and `postgres` are skipped automatically
Normal reads and writes are not affected during reorganization, but the final switchover must acquire an AccessExclusive lock on the table, briefly blocking all access. For high-throughput workloads, run it during off-peak hours or maintenance windows.
### Common Cron Configuration

```yaml
pg_crontab:
  - '00 04 * * 1 /pg/bin/pg-repack'   # Weekly on Monday at 4:00 AM
```
You can confirm database bloat through Pigsty’s PGCAT Database - Table Bloat panel and select high-bloat tables and indexes for reorganization.
For more details, see: Managing Relation Bloat.
## Remove Cron Jobs
When you remove a PostgreSQL cluster with the pgsql-rm.yml playbook, the `postgres` user's crontab file is deleted automatically.

```bash
./pgsql-rm.yml -l <cls> -t pg_crontab   # Remove cron jobs only
./pgsql-rm.yml -l <cls>                 # Remove the entire cluster (including cron jobs)
```
## Related Documentation
- Backup Management: PostgreSQL backup and recovery
- Monitoring System: PostgreSQL monitoring and alerting
- Cluster Management: Cluster creation, scaling, and teardown
- Patroni Management: HA cluster management