# timeseries

## Overview
| Package | Version | Category | License | Language |
|---|---|---|---|---|
| pg_timeseries | 0.2.0 | TIME | PostgreSQL | SQL |
| ID | Extension | Bin | Lib | Load | Create | Trust | Reloc | Schema |
|---|---|---|---|---|---|---|---|---|
| 1020 | timeseries | No | No | No | Yes | No | No | - |
| Related |
|---|
| pg_cron, pg_partman, timescaledb, timescaledb_toolkit, periods, temporal_tables, emaj, table_version, pg_task, pg_later |
## Version

| Type | Repo | Version | PG Ver | Package | Deps |
|---|---|---|---|---|---|
| EXT | PIGSTY | 0.2.0 | 18, 17, 16, 15, 14 | pg_timeseries | pg_cron, pg_partman |
| RPM | PIGSTY | 0.2.0 | 18, 17, 16, 15, 14 | pg_timeseries_$v | pg_cron_$v, pg_partman_$v |
| DEB | PIGSTY | 0.2.0 | 18, 17, 16, 15, 14 | postgresql-$v-pg-timeseries | postgresql-$v-cron, postgresql-$v-partman |
## Build

You can build the RPM / DEB packages for pg_timeseries with `pig build`:

```bash
pig build pkg pg_timeseries   # build RPM / DEB packages
```
## Install

You can install pg_timeseries directly. First, make sure the PGDG and PIGSTY repositories are added and enabled:

```bash
pig repo add pgsql -u   # add repos and refresh the package cache
```

Then install the extension with `pig`, or with your native package manager (`apt`/`yum`/`dnf`):

```bash
pig install pg_timeseries                # install for the current active PG version
pig ext install -y pg_timeseries -v 18   # PG 18
pig ext install -y pg_timeseries -v 17   # PG 17
pig ext install -y pg_timeseries -v 16   # PG 16
pig ext install -y pg_timeseries -v 15   # PG 15
pig ext install -y pg_timeseries -v 14   # PG 14
```

```bash
dnf install -y pg_timeseries_18   # PG 18
dnf install -y pg_timeseries_17   # PG 17
dnf install -y pg_timeseries_16   # PG 16
dnf install -y pg_timeseries_15   # PG 15
dnf install -y pg_timeseries_14   # PG 14
```

```bash
apt install -y postgresql-18-pg-timeseries   # PG 18
apt install -y postgresql-17-pg-timeseries   # PG 17
apt install -y postgresql-16-pg-timeseries   # PG 16
apt install -y postgresql-15-pg-timeseries   # PG 15
apt install -y postgresql-14-pg-timeseries   # PG 14
```

Create the extension:

```sql
CREATE EXTENSION timeseries CASCADE;  -- requires: pg_cron, pg_partman
```
## Usage
This extension provides a cohesive user experience around the creation, maintenance, and use of time-series tables.
### Getting Started

Assuming you already have a partitioned table created, simply call the `enable_ts_table` function with your table name:

```sql
CREATE EXTENSION timeseries CASCADE;
SELECT enable_ts_table('sensor_readings');
```
With this one call, several things will happen:
- The table will be restructured as a series of partitions using PostgreSQL’s native PARTITION features
- Each partition covers a particular range of time (one week by default)
- New partitions will be created for some time in the future (one month by default)
- Once an hour, a maintenance job will create any missing partitions, as well as any needed future ones
### Using Your Tables

#### Indexes
The time-series tables you create start out life as little more than typical partitioned PostgreSQL tables. All of PostgreSQL’s existing functionality will “just work” with them.
Traditional B-Tree indexes work well for time-series data, but you may wish to benchmark BRIN indexes as well, as they may perform better in specific query scenarios (often queries with many results). Start with B-Tree if you don’t anticipate more than a million records in each partition (by default, partitions are one week long).
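As a sketch of the two options (assuming the `sensor_readings` table from the example above, with a timestamp column named `recorded_at` for illustration):

```sql
-- B-Tree: the usual default; good for point lookups and narrow ranges
CREATE INDEX ON sensor_readings USING btree (recorded_at);

-- BRIN: far smaller; worth benchmarking for wide range scans
CREATE INDEX ON sensor_readings USING brin (recorded_at);
```

Indexes created on the parent of a partitioned table are propagated to each partition automatically, so either statement covers the whole time-series table.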
#### Partition Sizing
Because calculating the total size of partitioned tables can be tedious, this extension provides several easy-to-use views surfacing this information.
To examine the table (data), index, and total size for each of your partitions, query the time-series partition information view, ts_part_info. A general rule of thumb is that each partition should be able to fit within roughly one quarter of your available memory.
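For example, to inspect the per-partition sizes (the exact columns returned by the view are not shown here):

```sql
SELECT * FROM ts_part_info;
```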
#### Retention
Call set_ts_retention_policy with your time-series table and an interval (say, '90 days') to establish a retention policy. Once an hour, any partitions falling entirely outside the retention window will be dropped. Use clear_ts_retention_policy to revert to the default behavior (infinite retention). Each of these functions will return the previous retention policy when called.
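Continuing with the illustrative `sensor_readings` table, setting and clearing a retention policy might look like:

```sql
-- keep roughly 90 days of data; returns the previous policy, if any
SELECT set_ts_retention_policy('sensor_readings', '90 days');

-- revert to the default behavior (infinite retention)
SELECT clear_ts_retention_policy('sensor_readings');
```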
#### Compression
By calling set_ts_compression_policy on a time-series table with an appropriate interval (perhaps '1 month'), this extension will compress partitions (using a columnar storage method) older than the specified interval, once an hour. A function is also provided for clearing any existing policy (existing partitions will not be decompressed, however).
The compression features depend on the `citus` and `citus_columnar` extensions:

```sql
CREATE EXTENSION citus;
CREATE EXTENSION citus_columnar;
```
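With those extensions in place, establishing a compression policy on the illustrative `sensor_readings` table might look like:

```sql
-- compress partitions older than one month into columnar storage
SELECT set_ts_compression_policy('sensor_readings', '1 month');
```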
### Analytics Helpers

#### `first` and `last`
These two functions help clean up the syntax of a fairly common pattern: a query is grouped by one dimension, but a user wants to know what the first or last row in a group is when ordered by a different dimension.
```sql
SELECT machine_id,
       last(cpu_util, recorded_at)
  FROM events
 GROUP BY machine_id;
```
#### `date_bin_table`
This function automates the tedium of aligning time-series values to a given width, or “stride”, and makes sure to include NULL rows for any time periods where the source table has no data points.
```sql
SELECT * FROM date_bin_table(NULL::target_table, '1 hour', '[2024-02-01 00:00, 2024-02-02 15:00]');
```
The output of this query will differ from simply hitting the target table directly in three ways:
- Rows will be sorted by time, ascending
- The time column’s values will be binned to the provided width
- Extra rows will be added for periods with no data. They will include the time stamp for that bin and NULL in all other columns
#### `make_view_incremental`
This function accepts a view and converts it into a materialized view which is kept up-to-date after every modification. This removes the need for users to pick between always up-to-date VIEWs and having to call REFRESH on MATERIALIZED VIEWs.
The underlying functionality is provided by a fork of pg_ivm. Enable the `pg_ivm` extension if you want to use this feature:

```sql
CREATE EXTENSION pg_ivm;
```
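With `pg_ivm` enabled, converting a view is a single call (`hourly_stats` is an illustrative view name, not one defined by the extension):

```sql
-- turn an ordinary view into an incrementally-maintained materialized view
SELECT make_view_incremental('hourly_stats');
```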
## Requirements

### Optional Dependencies
- pg_ivm — for incremental materialized views
- Citus & Citus Columnar — for compression features