aws_s3
aws_s3: PostgreSQL extension to import/export data from/to S3
Repository
chimpler/postgres-aws-s3
https://github.com/chimpler/postgres-aws-s3
Source
aws_s3-0.0.1.tar.gz
Overview
| Package | Version | Category | License | Language |
|---|---|---|---|---|
| aws_s3 | 0.0.1 | FDW | Apache-2.0 | SQL |
| ID | Extension | Bin | Lib | Load | Create | Trust | Reloc | Schema |
|---|---|---|---|---|---|---|---|---|
| 8800 | aws_s3 | No | No | No | Yes | No | Yes | - |
| Related |
|---|
| pg_parquet, hdfs_fdw, file_fdw, duckdb_fdw, wrappers, pg_bulkload, columnar, pg_analytics |
Version
| Type | Repo | Version | PG Ver | Package | Deps |
|---|---|---|---|---|---|
| EXT | PIGSTY | 0.0.1 | 18, 17, 16, 15, 14 | aws_s3 | - |
| RPM | PIGSTY | 0.0.1 | 18, 17, 16, 15, 14 | aws_s3_$v | - |
| DEB | PIGSTY | 0.0.1 | 18, 17, 16, 15, 14 | postgresql-$v-aws-s3 | - |
Build
You can build the RPM / DEB packages for aws_s3 using pig build:
```bash
pig build pkg aws_s3    # build RPM / DEB packages
```
Install
You can install aws_s3 directly. First, make sure the PGDG and PIGSTY repositories are added and enabled:
```bash
pig repo add pgsql -u    # Add repo and update cache
```
Install the extension using pig or apt/yum/dnf:
```bash
pig install aws_s3                  # Install for current active PG version
pig ext install -y aws_s3 -v 18     # PG 18
pig ext install -y aws_s3 -v 17     # PG 17
pig ext install -y aws_s3 -v 16     # PG 16
pig ext install -y aws_s3 -v 15     # PG 15
pig ext install -y aws_s3 -v 14     # PG 14
```

```bash
dnf install -y aws_s3_18    # PG 18
dnf install -y aws_s3_17    # PG 17
dnf install -y aws_s3_16    # PG 16
dnf install -y aws_s3_15    # PG 15
dnf install -y aws_s3_14    # PG 14
```

```bash
apt install -y postgresql-18-aws-s3    # PG 18
apt install -y postgresql-17-aws-s3    # PG 17
apt install -y postgresql-16-aws-s3    # PG 16
apt install -y postgresql-15-aws-s3    # PG 15
apt install -y postgresql-14-aws-s3    # PG 14
```
Create Extension:
```sql
CREATE EXTENSION aws_s3;
```
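After creating the extension, you can confirm it is registered by querying the standard `pg_extension` catalog:

```sql
-- List the installed aws_s3 extension and its version
SELECT extname, extversion FROM pg_extension WHERE extname = 'aws_s3';
```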
Usage
aws_s3: PostgreSQL extension to import/export data from/to S3
Setup Credentials
Configure AWS credentials via PostgreSQL session variables:
```sql
SET aws_s3.access_key_id TO 'your_access_key';
SET aws_s3.secret_key TO 'your_secret_key';
SET aws_s3.session_token TO 'optional_session_token'; -- if using temporary credentials
```
For local development with LocalStack:
```sql
SET aws_s3.endpoint_url TO 'http://localhost:4566';
```
Import Data from S3
```sql
CREATE EXTENSION aws_s3;

CREATE TABLE animals (
    name text,
    age  int
);

SELECT aws_s3.table_import_from_s3(
    'animals',
    '',
    '(FORMAT CSV, DELIMITER '','', HEADER true)',
    'my-bucket',
    'animals.csv',
    'us-east-1'
);

SELECT * FROM animals;
Parameters: table name, column list (empty string for all), COPY options, S3 bucket, S3 key, AWS region.
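The upstream README also describes an RDS-compatible call style that bundles bucket, key, and region into a single value via `aws_commons.create_s3_uri`. A sketch, assuming your build ships the `aws_commons` helper schema (verify against your installed version):

```sql
-- Assumes aws_commons.create_s3_uri(bucket, file_path, region) is available,
-- mirroring the RDS aws_s3 API this extension is modeled on
SELECT aws_s3.table_import_from_s3(
    'animals',
    '',
    '(FORMAT CSV, DELIMITER '','', HEADER true)',
    aws_commons.create_s3_uri('my-bucket', 'animals.csv', 'us-east-1')
);
```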
Export Data to S3
```sql
SELECT * FROM aws_s3.query_export_to_s3(
    'SELECT * FROM animals',
    'my-bucket',
    'export/animals.csv',
    'us-east-1',
    options := 'FORMAT CSV, HEADER true'
);
```
Parameters: SQL query, S3 bucket, S3 key, AWS region, COPY options.
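The export function reports what was written. A sketch, assuming the `rows_uploaded`, `files_uploaded`, and `bytes_uploaded` output columns documented in the upstream README:

```sql
-- Output column names are taken from the upstream README;
-- verify them against your installed version
SELECT rows_uploaded, files_uploaded, bytes_uploaded
FROM aws_s3.query_export_to_s3(
    'SELECT * FROM animals',
    'my-bucket',
    'export/animals.csv',
    'us-east-1',
    options := 'FORMAT CSV, HEADER true'
);
```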
Features
- Files with `Content-Encoding: gzip` metadata are automatically decompressed during import
- Credentials can be passed directly as function arguments or via session variables
- Uses the same COPY format options as PostgreSQL (CSV, TEXT, BINARY, with all related settings such as DELIMITER, HEADER, NULL, etc.)
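Per-call credentials avoid storing secrets in session state. A sketch, assuming the optional trailing `access_key` / `secret_key` / `session_token` parameters documented upstream (the values below are placeholders):

```sql
-- access_key / secret_key / session_token are optional named parameters
-- per the upstream README; placeholder values shown
SELECT aws_s3.table_import_from_s3(
    'animals', '', '(FORMAT CSV, HEADER true)',
    'my-bucket', 'animals.csv', 'us-east-1',
    access_key    := 'your_access_key',
    secret_key    := 'your_secret_key',
    session_token := NULL
);
```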