# pg_incremental

## Overview

| Package | Version | Category | License | Language |
|---|---|---|---|---|
| pg_incremental | 1.5.0 | FEAT | PostgreSQL | C |
| ID | Extension | Bin | Lib | Load | Create | Trust | Reloc | Schema |
|---|---|---|---|---|---|---|---|---|
| 2850 | pg_incremental | No | Yes | No | Yes | No | No | pg_catalog |
| Related |
|---|
| age, hll, rum, pg_graphql, pg_jsonschema, jsquery, pg_hint_plan |
pg_cron is optional since v1.3 and is only required for scheduled pipelines.
## Version

| Type | Repo | Version | PG Ver | Package | Deps |
|---|---|---|---|---|---|
| EXT | PIGSTY | 1.5.0 | 18,17,16,15,14 | pg_incremental | - |
| RPM | PIGSTY | 1.5.0 | 18,17,16,15,14 | pg_incremental_$v | - |
| DEB | PIGSTY | 1.5.0 | 18,17,16,15,14 | postgresql-$v-pg-incremental | - |
## Build

You can build the RPM / DEB packages for pg_incremental using `pig build`:

```bash
pig build pkg pg_incremental   # build RPM / DEB packages
```
## Install

You can install pg_incremental directly. First, make sure the PGDG and PIGSTY repositories are added and enabled:

```bash
pig repo add pgsql -u   # add repos and update cache
```

Install the extension using pig or apt/yum/dnf:

```bash
pig install pg_incremental                  # install for the current active PG version
pig ext install -y pg_incremental -v 18     # PG 18
pig ext install -y pg_incremental -v 17     # PG 17
pig ext install -y pg_incremental -v 16     # PG 16
dnf install -y pg_incremental_18            # PG 18
dnf install -y pg_incremental_17            # PG 17
dnf install -y pg_incremental_16            # PG 16
apt install -y postgresql-18-pg-incremental # PG 18
apt install -y postgresql-17-pg-incremental # PG 17
apt install -y postgresql-16-pg-incremental # PG 16
```

Create the extension:

```sql
CREATE EXTENSION pg_incremental;
```
## Usage

- Sources: README, v1.5.0 release

pg_incremental defines exactly-once incremental pipelines for append-only tables and file feeds. Upstream documents three pipeline types: sequence, time-interval, and file-list.
### Install And Scheduling Model

The upstream README still documents pg_cron-backed scheduling and installs with:

```sql
CREATE EXTENSION pg_incremental CASCADE;
```

Pipelines run immediately when created unless `execute_immediately := false` is passed, and then continue running on a pg_cron schedule. The README notes that each scheduled execution appears in `cron.job_run_details`, even when no new data is available.
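As a minimal sketch of deferring the first run, assuming the named `execute_immediately` parameter mentioned above (the pipeline name and command are illustrative, not from the upstream docs):

```sql
-- Illustrative: create a pipeline that does not run on creation;
-- execution is left entirely to the pg_cron schedule.
SELECT incremental.create_sequence_pipeline(
  'event-count', 'events',
  $$ INSERT INTO event_counts
     SELECT count(*) FROM events WHERE event_id BETWEEN $1 AND $2 $$,
  execute_immediately := false
);
```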
### Sequence Pipelines

Use sequence pipelines to process safe ranges of sequence values:

```sql
SELECT incremental.create_sequence_pipeline('event-aggregation', 'events', $$
  INSERT INTO events_agg
  SELECT date_trunc('day', event_time), count(*)
  FROM events
  WHERE event_id BETWEEN $1 AND $2
  GROUP BY 1
  ON CONFLICT (day) DO UPDATE
  SET event_count = events_agg.event_count + excluded.event_count
$$);
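The snippet above assumes a schema along these lines (hypothetical table definitions, not taken from the upstream docs):

```sql
-- Hypothetical schema for the example above: an append-only source table
-- keyed by a sequence-backed identity column, and an aggregate table with
-- a unique day column so that ON CONFLICT (day) can upsert.
CREATE TABLE events (
    event_id   bigint GENERATED ALWAYS AS IDENTITY,
    event_time timestamptz NOT NULL DEFAULT now(),
    payload    jsonb
);

CREATE TABLE events_agg (
    day         timestamptz PRIMARY KEY,
    event_count bigint NOT NULL
);
```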
The README documents a `max_batch_size` parameter for limiting how many sequence IDs are processed per run.
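A hedged sketch of capping batch size (the parameter name comes from the README; the value and named-argument call style are illustrative):

```sql
-- Illustrative: limit each run to at most 100k sequence IDs, so a large
-- backlog is worked off in bounded batches instead of one huge transaction.
SELECT incremental.create_sequence_pipeline(
  'event-aggregation', 'events',
  $$ INSERT INTO events_agg
     SELECT date_trunc('day', event_time), count(*)
     FROM events
     WHERE event_id BETWEEN $1 AND $2
     GROUP BY 1
     ON CONFLICT (day) DO UPDATE
     SET event_count = events_agg.event_count + excluded.event_count $$,
  max_batch_size := 100000
);
```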
### Time-Interval Pipelines

Use time-interval pipelines when the command should receive $1 and $2 as the bounds of a time window:

```sql
SELECT incremental.create_time_interval_pipeline('event-aggregation', '1 day', $$
  INSERT INTO events_agg
  SELECT event_time::date, count(DISTINCT event_id)
  FROM events
  WHERE event_time >= $1 AND event_time < $2
  GROUP BY 1
$$);
```
For export-style jobs, the README documents `batched := false`, which makes each interval run in a separate command invocation.
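For instance, an unbatched export pipeline might look like this (`export_events` is a hypothetical user-defined function standing in for whatever export logic the job needs; the parameter name `batched` is from the README):

```sql
-- Illustrative: export each day separately. With batched := false, the
-- command runs once per 1-day interval rather than once over the whole
-- pending range, so each export covers exactly one day.
SELECT incremental.create_time_interval_pipeline(
  'event-export', '1 day',
  $$ SELECT export_events($1, $2) $$,
  batched := false
);
```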
### File-List Pipelines

Use file-list pipelines to process newly discovered files:

```sql
SELECT incremental.create_file_list_pipeline('event-import', 's3://mybucket/events/*.csv', $$
  SELECT import_events($1)
$$);
```
The v1.5.0 release adds `max_batches_per_run` to file-list pipelines, and the README documents `incremental.skip_file()` for permanently marking a bad file as processed.
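A hedged sketch of skipping a corrupt file (the two-argument signature is an assumption based on the README's description; the file path is a placeholder):

```sql
-- Illustrative: permanently mark one bad file as processed so the
-- 'event-import' pipeline stops retrying it.
SELECT incremental.skip_file('event-import', 's3://mybucket/events/broken.csv');
```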
### Operations And Monitoring

The README documents:

- `CALL incremental.execute_pipeline(name)`: run the pipeline once if new work exists.
- `SELECT incremental.reset_pipeline(name)`: reset progress.
- `SELECT incremental.drop_pipeline(name)`: remove a pipeline.
- Views and tables such as `incremental.sequence_pipelines`, `incremental.time_interval_pipelines`, `incremental.file_list_pipelines`, and `incremental.processed_files`.
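A minimal operational round-trip with these calls might look like the following (the pipeline name matches the earlier examples; view columns depend on the installed version):

```sql
-- Trigger one run if new data exists, then inspect pipeline state.
CALL incremental.execute_pipeline('event-aggregation');
SELECT * FROM incremental.sequence_pipelines;

-- Start over from scratch, or remove the pipeline entirely.
SELECT incremental.reset_pipeline('event-aggregation');
SELECT incremental.drop_pipeline('event-aggregation');
```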
The v1.5.0 release note also calls out a DROP EXTENSION fix for environments where pg_cron is not present.