Self-Hosting Supabase on PostgreSQL
Supabase is great; owning your own Supabase is even better. Here's a comprehensive tutorial for self-hosting production-grade Supabase on local or cloud VMs and bare-metal servers.
curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./bootstrap # install ansible
./configure -c app/supa # use supabase config (please CHANGE CREDENTIALS in pigsty.yml)
vi pigsty.yml # edit domain name, password, keys,...
./install.yml # install pigsty
./docker.yml # install docker compose
./app.yml # launch supabase stateless part with docker
What is Supabase?
Supabase is an open-source Firebase alternative, a Backend as a Service (BaaS).
Supabase wraps the PostgreSQL kernel and vector extensions, along with authentication, realtime subscriptions, edge functions, object storage, and instant REST and GraphQL APIs generated from your Postgres schema. It lets you skip most backend work, requiring only database design and frontend skills to ship quickly.
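For instance, once a table exists in your schema, it is immediately queryable over REST. A rough sketch (the todos table and the $ANON_KEY variable are illustrative placeholders, not part of this template):

curl "https://supa.pigsty/rest/v1/todos?select=*" \
  -H "apikey: $ANON_KEY" \
  -H "Authorization: Bearer $ANON_KEY"   # query a hypothetical todos table via the auto-generated REST API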
Currently, Supabase may be the most popular open-source project in the PostgreSQL ecosystem, boasting over 74,000 stars on GitHub. It has become quite popular among developers and startups, thanks to a generous free plan, much like Cloudflare and Neon.
Why Self-Hosting?
Supabase's slogan is: "Build in a weekend, scale to millions". It is indeed very cost-effective at small scale (e.g., 4C/8G). But when you really grow toward millions of users, some will choose to self-host their own Supabase, for functionality, performance, cost, and other reasons.
That's where Pigsty comes in. Pigsty provides a complete one-click self-hosting solution for Supabase. A self-hosted Supabase enjoys full PostgreSQL monitoring, IaC, PITR, and high availability out of the box.
You can run the latest PostgreSQL 17 (or 16, 15) kernels (Supabase currently uses 15), along with 404 PostgreSQL extensions out of the box, on mainstream Linux distros, with production-grade HA PostgreSQL, MinIO, a Prometheus & Grafana stack for observability, and Nginx as a reverse proxy.
Since most of the Supabase-maintained extensions are not available in the official PGDG repo, we have compiled RPM/DEB packages for all of them and put them in the Pigsty repo:
- pg_graphql: GraphQL support in PostgreSQL (Rust extension via PIGSTY)
- pg_jsonschema: JSON Schema validation (Rust extension via PIGSTY)
- wrappers: Supabase’s foreign data source wrappers bundle (Rust extension via PIGSTY)
- index_advisor: Query index advisor (SQL extension via PIGSTY)
- pg_net: Async non-blocking HTTP/HTTPS requests in SQL (C extension via PIGSTY)
- vault: Store encrypted credentials in Vault (C extension via PIGSTY)
- pgjwt: PostgreSQL implementation of JSON Web Token API (SQL extension via PIGSTY)
- pgsodium: Table data encryption storage TDE (C extension via PIGSTY)
- supautils: Secure database clusters in cloud environments (C extension via PIGSTY)
- pg_plan_filter: Block specific queries using execution plan cost filtering (C extension via PIGSTY)
Everything is under your control: you have the ability and freedom to scale PGSQL, MinIO, and Supabase itself, and to take full advantage of the performance and cost benefits of modern hardware like Gen5 NVMe SSDs.
All you need to do is prepare a VM, run a few commands, and wait about 10 minutes.
Get Started
First, download & install Pigsty as usual, with the supa config template:
curl -fsSL https://repo.pigsty.io/get | bash; cd ~/pigsty
./bootstrap # install ansible
./configure -c app/supa # use supabase config (please CHANGE CREDENTIALS in pigsty.yml)
vi pigsty.yml # edit config file
./install.yml # install pigsty
Please change the pigsty.yml config file according to your needs before deploying Supabase (especially the credentials). For dev/test/demo purposes, we will skip that for now and come back to it later.
Then, run docker.yml and app.yml to launch the stateless part of Supabase.
./docker.yml # install docker compose
./app.yml # launch supabase stateless part with docker
You can access the Supabase API / Web UI directly through port 8000/8443. With configured DNS, or a local /etc/hosts entry, you can also use the default supa.pigsty domain name via the 80/443 infra portal.
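For a quick local test, a hosts entry plus a curl probe is enough (a sketch; 10.10.10.10 is this tutorial's placeholder node IP):

echo '10.10.10.10 supa.pigsty' | sudo tee -a /etc/hosts   # resolve the default domain locally
curl -ki https://supa.pigsty/rest/v1/                     # -k: self-signed cert; expect an auth error without an apikey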
Default credentials for Supabase Studio: supabase / pigsty
Architecture
Pigsty's Supabase is based on the Supabase Docker Compose template, with some slight modifications to fit in Pigsty's default ACL model.
The stateful part of this template is replaced by Pigsty's managed PostgreSQL cluster and MinIO cluster. The container part is stateless, so you can launch / destroy / run multiple Supabase containers on the same stateful PGSQL / MinIO clusters simultaneously to scale out.
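Since all state lives in PGSQL / MinIO, the container layer can be recycled freely. A sketch (assuming the default /opt/supabase app directory; the compose file name may differ in your setup):

docker compose -f /opt/supabase/docker-compose.yml down   # tear down the stateless containers; data survives
./app.yml -t app_launch                                   # bring them back from the pigsty home dir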
The built-in supa.yml config template will create a single-node Supabase, with a singleton PostgreSQL and an SNSD MinIO server.
You can use multinode PostgreSQL clusters and MNMD MinIO clusters / external S3 services instead in production; we will cover that later.
Config Detail
Here is a checklist for self-hosting:
- Hardware: the necessary VM/BM resources; at least one node, 3-4 nodes recommended for HA
- Linux OS: a freshly installed Linux x86_64 server; check the compatible distros
- Network: a static IPv4 address which can be used as the node identity
- Admin User: passwordless ssh & sudo are recommended for the admin user
- Conf Template: use the supa config template if you don't know how to manually configure Pigsty
The built-in conf/app/supa.yml config template is shown below.
The supa Config Template
all:
  children:

    # the supabase stateless part (default username & password: supabase/pigsty)
    supa:
      hosts:
        10.10.10.10: {}
      vars:
        app: supabase   # specify app name (supa) to be installed (in the apps)
        apps:           # define all applications
          supabase:     # the definition of the supabase app
            conf:       # override /opt/supabase/.env
              # IMPORTANT: CHANGE JWT_SECRET AND REGENERATE CREDENTIALS ACCORDINGLY!
              # https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
              JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
              ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
              SERVICE_ROLE_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
              DASHBOARD_USERNAME: supabase
              DASHBOARD_PASSWORD: pigsty

              # postgres connection string (use the correct ip and port)
              POSTGRES_HOST: 10.10.10.10        # point to the local postgres node
              POSTGRES_PORT: 5436               # access via the 'default' service, which always routes to the primary postgres
              POSTGRES_DB: postgres             # the supabase underlying database
              POSTGRES_PASSWORD: DBUser.Supa    # password for supabase_admin and multiple supabase users

              # expose supabase via domain name
              SITE_URL: https://supa.pigsty             # <------- Change This to your external domain name
              API_EXTERNAL_URL: https://supa.pigsty     # <------- Otherwise the storage api may not work!
              SUPABASE_PUBLIC_URL: https://supa.pigsty  # <------- DO NOT FORGET TO PUT IT IN infra_portal!

              # if using s3/minio as file storage
              S3_BUCKET: supa
              S3_ENDPOINT: https://sss.pigsty:9000
              S3_ACCESS_KEY: supabase
              S3_SECRET_KEY: S3User.Supabase
              S3_FORCE_PATH_STYLE: true
              S3_PROTOCOL: https
              S3_REGION: stub
              MINIO_DOMAIN_IP: 10.10.10.10      # sss.pigsty domain name will resolve to this ip statically

              # if using SMTP (optional)
              #SMTP_ADMIN_EMAIL: [email protected]
              #SMTP_HOST: supabase-mail
              #SMTP_PORT: 2500
              #SMTP_USER: fake_mail_user
              #SMTP_PASS: fake_mail_password
              #SMTP_SENDER_NAME: fake_sender
              #ENABLE_ANONYMOUS_USERS: false

    # infra cluster for proxy, monitor, alert, etc..
    infra: { hosts: { 10.10.10.10: { infra_seq: 1 } } }

    # etcd cluster for ha postgres
    etcd: { hosts: { 10.10.10.10: { etcd_seq: 1 } }, vars: { etcd_cluster: etcd } }

    # minio cluster, s3 compatible object storage
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

    # pg-meta, the underlying postgres database for supabase
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        pg_users:
          # supabase roles: anon, authenticated, dashboard_user
          - { name: anon           ,login: false }
          - { name: authenticated  ,login: false }
          - { name: dashboard_user ,login: false ,replication: true ,createdb: true ,createrole: true }
          - { name: service_role   ,login: false ,bypassrls: true }
          # supabase users: please use the same password
          - { name: supabase_admin             ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: true  ,roles: [ dbrole_admin ] ,superuser: true ,replication: true ,createdb: true ,createrole: true ,bypassrls: true }
          - { name: authenticator              ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,roles: [ dbrole_admin ,authenticated ,anon ,service_role ] }
          - { name: supabase_auth_admin        ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_storage_admin     ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,roles: [ dbrole_admin ,authenticated ,anon ,service_role ] ,createrole: true }
          - { name: supabase_functions_admin   ,password: 'DBUser.Supa' ,pgbouncer: true ,inherit: false ,roles: [ dbrole_admin ] ,createrole: true }
          - { name: supabase_replication_admin ,password: 'DBUser.Supa' ,replication: true ,roles: [ dbrole_admin ] }
          - { name: supabase_read_only_user    ,password: 'DBUser.Supa' ,bypassrls: true ,roles: [ dbrole_readonly ,pg_read_all_data ] }
        pg_databases:
          - name: postgres
            baseline: supabase.sql
            owner: supabase_admin
            comment: supabase postgres database
            schemas: [ extensions ,auth ,realtime ,storage ,graphql_public ,supabase_functions ,_analytics ,_realtime ]
            extensions:
              - { name: pgcrypto  ,schema: extensions }  # cryptographic functions
              - { name: pg_net    ,schema: extensions }  # async HTTP
              - { name: pgjwt     ,schema: extensions }  # json web token API for postgres
              - { name: uuid-ossp ,schema: extensions }  # generate universally unique identifiers (UUIDs)
              - { name: pgsodium       }  # pgsodium is a modern cryptography library for Postgres
              - { name: supabase_vault }  # Supabase Vault Extension
              - { name: pg_graphql     }  # pg_graphql: GraphQL support
              - { name: pg_jsonschema  }  # pg_jsonschema: validate json schema
              - { name: wrappers       }  # wrappers: FDW collections
              - { name: http           }  # http: allows web page retrieval inside the database
              - { name: pg_cron        }  # pg_cron: job scheduler for PostgreSQL
              - { name: timescaledb    }  # timescaledb: scalable inserts and complex queries for time-series data
              - { name: pg_tle         }  # pg_tle: trusted language extensions for PostgreSQL
              - { name: vector         }  # pgvector: vector similarity search
              - { name: pgmq           }  # pgmq: a lightweight message queue like AWS SQS and RSMQ
        # supabase required extensions
        pg_libs: 'timescaledb, plpgsql, plpgsql_check, pg_cron, pg_net, pg_stat_statements, auto_explain, pg_tle, plan_filter'
        pg_parameters:
          cron.database_name: postgres
          pgsodium.enable_event_trigger: off
        pg_hba_rules: # supabase hba rules, require access from docker network
          - { user: all ,db: postgres ,addr: intra         ,auth: pwd ,title: 'allow supabase access from intranet' }
          - { user: all ,db: postgres ,addr: 172.17.0.0/16 ,auth: pwd ,title: 'allow access from local docker network' }
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ]  # make a full backup every 1am

  #==============================================================#
  # Global Parameters
  #==============================================================#
  vars:
    version: v3.4.1       # pigsty version string
    admin_ip: 10.10.10.10 # admin node ip address
    region: default       # upstream mirror region: default|china|europe
    node_tune: oltp       # node tuning specs: oltp,olap,tiny,crit
    pg_conf: oltp.yml     # pgsql tuning specs: {oltp,olap,tiny,crit}.yml
    docker_enabled: true  # enable docker on the app group
    #docker_registry_mirrors: ["https://docker.1ms.run"]  # use mirror in mainland china
    proxy_env:            # global proxy env when downloading packages & pulling docker images
      no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*,*.tsinghua.edu.cn"
      #http_proxy:  127.0.0.1:12345 # add your proxy env here for downloading packages or pulling images
      #https_proxy: 127.0.0.1:12345 # usually the proxy is formatted as http://user:[email protected]
      #all_proxy:   127.0.0.1:12345
    certbot_email: [email protected]  # your email address for applying free let's encrypt ssl certs
    infra_portal:                  # domain names and upstream servers
      home         : { domain: h.pigsty }
      grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" ,websocket: true }
      prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
      alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
      minio        : { domain: m.pigsty ,endpoint: "10.10.10.10:9001" ,https: true ,websocket: true }
      blackbox     : { endpoint: "${admin_ip}:9115" }
      loki         : { endpoint: "${admin_ip}:3100" }
      # expose supa studio UI and API via nginx
      supa :                         # nginx server config for supabase
        domain: supa.pigsty          # REPLACE WITH YOUR OWN DOMAIN!
        endpoint: "10.10.10.10:8000" # supabase service endpoint: IP:PORT
        websocket: true              # add websocket support
        certbot: supa.pigsty         # certbot cert name, apply with `make cert`

    #----------------------------------#
    # Credential: CHANGE THESE PASSWORDS
    #----------------------------------#
    #grafana_admin_username: admin
    grafana_admin_password: pigsty
    #pg_admin_username: dbuser_dba
    pg_admin_password: DBUser.DBA
    #pg_monitor_username: dbuser_monitor
    pg_monitor_password: DBUser.Monitor
    #pg_replication_username: replicator
    pg_replication_password: DBUser.Replicator
    #patroni_username: postgres
    patroni_password: Patroni.API
    #haproxy_admin_username: admin
    haproxy_admin_password: pigsty
    #minio_access_key: minioadmin
    minio_secret_key: minioadmin  # minio root secret key, `minioadmin` by default, also change pgbackrest_repo.minio.s3_key_secret

    # use minio as supabase file storage, single node single driver mode for demonstration purpose
    minio_buckets: [ { name: pgsql }, { name: supa } ]
    minio_users:
      - { access_key: dba        ,secret_key: S3User.DBA      ,policy: consoleAdmin }
      - { access_key: pgbackrest ,secret_key: S3User.Backup   ,policy: readwrite }
      - { access_key: supabase   ,secret_key: S3User.Supabase ,policy: readwrite }
    minio_endpoint: https://sss.pigsty:9000     # explicitly overwrite minio endpoint with haproxy port
    node_etc_hosts: ["10.10.10.10 sss.pigsty"]  # domain name to access minio from all nodes (required)

    # use minio as the default backup repo for PostgreSQL
    pgbackrest_method: minio  # pgbackrest repo method: local,minio,[user-defined...]
    pgbackrest_repo:          # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
      local:                  # default pgbackrest repo with local posix fs
        path: /pg/backup                 # local backup directory, `/pg/backup` by default
        retention_full_type: count       # retention full backups by count
        retention_full: 2                # keep 2, at most 3 full backups when using the local fs repo
      minio:                  # optional minio repo for pgbackrest
        type: s3                         # minio is s3-compatible, so s3 is used
        s3_endpoint: sss.pigsty          # minio endpoint domain name, `sss.pigsty` by default
        s3_region: us-east-1             # minio region, us-east-1 by default, useless for minio
        s3_bucket: pgsql                 # minio bucket name, `pgsql` by default
        s3_key: pgbackrest               # minio user access key for pgbackrest
        s3_key_secret: S3User.Backup     # minio user secret key for pgbackrest <------------------ HEY, DID YOU CHANGE THIS?
        s3_uri_style: path               # use path style uri for minio rather than host style
        path: /pgbackrest                # minio backup path, default is `/pgbackrest`
        storage_port: 9000               # minio port, 9000 by default
        storage_ca_file: /etc/pki/ca.crt # minio ca file path, `/etc/pki/ca.crt` by default
        block: y                         # enable block incremental backup
        bundle: y                        # bundle small files into a single file
        bundle_limit: 20MiB              # limit for file bundles, 20MiB for object storage
        bundle_size: 128MiB              # target size for file bundles, 128MiB for object storage
        cipher_type: aes-256-cbc         # enable AES encryption for the remote backup repo
        cipher_pass: pgBackRest          # AES encryption password, default is 'pgBackRest' <----- HEY, DID YOU CHANGE THIS?
        retention_full_type: time        # retention full backup by time on minio repo
        retention_full: 14               # keep full backups for the last 14 days

    pg_version: 17
    repo_extra_packages: [ pg17-core ,pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-olap ,pg17-feat ,pg17-lang ,pg17-type ,pg17-util ,pg17-func ,pg17-admin ,pg17-stat ,pg17-sec ,pg17-fdw ,pg17-sim ,pg17-etl ]
    pg_extensions: [ pg17-time ,pg17-gis ,pg17-rag ,pg17-fts ,pg17-feat ,pg17-lang ,pg17-type ,pg17-util ,pg17-func ,pg17-admin ,pg17-stat ,pg17-sec ,pg17-fdw ,pg17-sim ,pg17-etl ,pg_mooncake ,pg_analytics ,pg_parquet ]  #,pg17-olap
For advanced topics, we may need to modify the configuration file to fit our needs.
- Security Enhancement
- Domain Name and HTTPS
- Sending Mail with SMTP
- MinIO or External S3
- True High Availability
Security Enhancement
For security reasons, you should change the default passwords in the pigsty.yml config file:
- grafana_admin_password: pigsty, Grafana admin password
- pg_admin_password: DBUser.DBA, PGSQL superuser password
- pg_monitor_password: DBUser.Monitor, PGSQL monitor user password
- pg_replication_password: DBUser.Replicator, PGSQL replication user password
- patroni_password: Patroni.API, Patroni HA agent password
- haproxy_admin_password: pigsty, load balancer admin password
- minio_access_key: minioadmin, MinIO root username
- minio_secret_key: minioadmin, MinIO root password
Supabase will use PostgreSQL & MinIO as its backend, so also change the following passwords for supabase business users:
- pg_users: passwords for Supabase business users in PostgreSQL
- minio_users: passwords for MinIO business users
pgBackRest ships backups and WALs to MinIO, so also change the corresponding password reference:
- pgbackrest_repo: update the minio repo's s3_key_secret to match the pgbackrest secret key in minio_users
PLEASE check the Supabase Self-Hosting: Generate API Keys to generate supabase credentials:
- jwt_secret: a secret key with at least 40 characters
- anon_key: a JWT generated for anonymous users, based on jwt_secret
- service_role_key: a JWT generated for elevated service roles, based on jwt_secret
- dashboard_username: Supabase Studio web portal username, supabase by default
- dashboard_password: Supabase Studio web portal password, pigsty by default
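If you prefer not to use the online generator, you can also mint these JWTs from the shell. A minimal sketch with openssl (the iss/exp payload values here are illustrative; follow the Supabase guide above for the canonical fields):

JWT_SECRET='your-super-secret-jwt-token-with-at-least-32-characters-long'
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
sign_role() {   # usage: sign_role anon|service_role
  local now=$(date +%s) header payload
  header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
  payload=$(printf '{"role":"%s","iss":"supabase","iat":%s,"exp":%s}' "$1" "$now" "$((now + 5*365*86400))" | b64url)
  printf '%s.%s.%s\n' "$header" "$payload" \
    "$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)"
}
echo "ANON_KEY:         $(sign_role anon)"
echo "SERVICE_ROLE_KEY: $(sign_role service_role)"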
If you have changed the default passwords for PostgreSQL and MinIO, you have to update the following parameters as well:
- postgres_password: according to pg_users
- s3_access_key and s3_secret_key: according to minio_users
Domain Name and HTTPS
If you’re using Supabase on your local machine or within a LAN, you can directly connect to Supabase via IP:Port through Kong’s exposed HTTP port 8000.
You can use a locally resolved domain name, but for serious production deployments, we recommend using a real domain name + HTTPS to access Supabase. In this case, you typically need to prepare the following:
- Your server should have a public IP address
- Purchase a domain name, and use the DNS resolution services provided by cloud/DNS/CDN providers to point it to your installation node's public IP (lower alternative: local /etc/hosts entries)
- Apply for a certificate, using free HTTPS certificates issued by certificate authorities like Let's Encrypt for encrypted communication (lower alternative: default self-signed certificates, manually trusted)
You can refer to the certbot tutorial to apply for a free HTTPS certificate. Here we assume your custom domain name is supa.pigsty.cc, so you should modify the supa domain in infra_portal like this:
all:
  vars:
    infra_portal:
      supa :
        domain: supa.pigsty.cc        # replace with your own domain!
        endpoint: "10.10.10.10:8000"
        websocket: true
        certbot: supa.pigsty.cc       # certificate name, usually the same as the domain name
If the domain name already resolves to your server's public IP, you can automatically request and deploy the certificate by executing the following command in the Pigsty home directory:
make cert
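The request only succeeds if the domain actually resolves to this server; a quick way to check (ifconfig.me is just one public-IP echo service):

dig +short supa.pigsty.cc   # should print your server's public IP
curl -s ifconfig.me; echo   # the public IP as seen from the outside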
In addition to the Pigsty component passwords, you also need to modify the domain-related Supabase settings: SITE_URL, API_EXTERNAL_URL, and SUPABASE_PUBLIC_URL. Configure them to your custom domain name, for example supa.pigsty.cc, then apply the configuration again:
./app.yml -t app_config,app_launch
As a lower alternative, you can use a local domain name to access Supabase: configure the resolution of supa.pigsty in your browser host's /etc/hosts or LAN DNS, pointing it to the public IP address of the installation node.
The Nginx on the Pigsty management node will issue a self-signed certificate for this domain name (the browser will show "Not Secure"), and forward the request to port 8000 of Kong, where it is processed by Supabase.
MinIO or External S3
Pigsty's self-hosted Supabase uses a local SNSD MinIO server by default, which serves Supabase itself for object storage and PostgreSQL for backups. For production use, you should consider using an HA MNMD MinIO cluster or an external S3-compatible service instead.
We recommend using an external S3 when:
- you have just one server available: an external S3 then gives you a minimal disaster recovery guarantee, with RTO in hours and RPO in MBs.
- you are operating in the cloud: using S3 directly is recommended, rather than wrapping expensive EBS volumes with MinIO
The terraform/spec/aliyun-meta-s3.tf template provides an example of provisioning a single node along with an S3 bucket.
To use an external S3-compatible service, you'll have to update two related references in the pigsty.yml config.
First, update the S3-related configuration in all.children.supa.vars.apps.[supabase].conf, pointing it to the external S3-compatible service:
# if using s3/minio as file storage
S3_BUCKET: supa
S3_ENDPOINT: https://sss.pigsty:9000
S3_ACCESS_KEY: supabase
S3_SECRET_KEY: S3User.Supabase
S3_FORCE_PATH_STYLE: true
S3_PROTOCOL: https
S3_REGION: stub
MINIO_DOMAIN_IP: 10.10.10.10 # sss.pigsty domain name will resolve to this ip statically
Then, reload the supabase service with the following command:
./app.yml -t app_config,app_launch
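It may be worth verifying the bucket credentials before the reload; one way with the AWS CLI (the endpoint and keys shown are this template's MinIO placeholders, substitute your real S3 service):

export AWS_ACCESS_KEY_ID=supabase
export AWS_SECRET_ACCESS_KEY=S3User.Supabase
aws --endpoint-url https://sss.pigsty:9000 s3 ls s3://supa --no-verify-ssl   # --no-verify-ssl only for the self-signed demo CA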
You can also use S3 as the backup repository for PostgreSQL. In all.vars.pgbackrest_repo, add a new aliyun backup repository definition:
all:
  vars:
    pgbackrest_method: aliyun  # pgbackrest backup method: local,minio,[other user-defined repositories...]
    pgbackrest_repo:           # pgbackrest backup repository: https://pgbackrest.org/configuration.html#section-repository
      aliyun:                  # define a new backup repository named aliyun
        type: s3               # aliyun oss is s3-compatible object storage
        s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
        s3_region: oss-cn-beijing
        s3_bucket: pigsty-oss
        s3_key: xxxxxxxxxxxxxx
        s3_key_secret: xxxxxxxx
        s3_uri_style: host
        path: /pgbackrest
        bundle: y
        cipher_type: aes-256-cbc
        cipher_pass: PG.${pg_cluster}  # set a cipher password bound to the cluster name
        retention_full_type: time
        retention_full: 14
Then, point all.vars.pgbackrest_method at the aliyun backup repository, and reset the pgBackRest backup:
./pgsql.yml -t pgbackrest
Pigsty will switch the backup repository to the external object storage.
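You can verify the new repo afterwards with pgBackRest's own CLI (a sketch; pg-meta is this template's cluster and stanza name):

sudo -iu postgres pgbackrest --stanza=pg-meta info   # list backups in the active repository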
Sending Mail with SMTP
Some Supabase features require email. For production use, I'd recommend using an external SMTP service, since self-hosted SMTP servers often end up with rejected or spam-flagged mail.
To do this, modify the Supabase configuration and add SMTP credentials:
all:
  children:
    supa:                  # supa group
      vars:                # supa group vars
        apps:              # supa group app list
          supabase:        # the supabase app
            conf:          # the supabase app conf entries
              SMTP_HOST: smtpdm.aliyun.com:80
              SMTP_PORT: 80
              SMTP_USER: [email protected]
              SMTP_PASS: your_email_user_password
              SMTP_SENDER_NAME: MySupabase
              SMTP_ADMIN_EMAIL: [email protected]
              ENABLE_ANONYMOUS_USERS: false
And don’t forget to reload the supabase service with ./app.yml -t app_config,app_launch
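A rough way to smoke-test the SMTP setup is to let GoTrue send a real mail, e.g. an invite (assuming your SERVICE_ROLE_KEY and domain; the recipient address is a placeholder):

curl -X POST 'https://supa.pigsty/auth/v1/invite' \
  -H "apikey: $SERVICE_ROLE_KEY" \
  -H "Authorization: Bearer $SERVICE_ROLE_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"email": "[email protected]"}'   # an invite mail should arrive via the configured SMTP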
Backup Strategy
In the supabase template, Pigsty already schedules a daily full backup at 01:00 AM; you can refer to backup/restore to modify the backup strategy.
all:
  children:
    pg-meta:
      hosts: { 10.10.10.10: { pg_seq: 1, pg_role: primary } }
      vars:
        pg_cluster: pg-meta
        node_crontab: [ '00 01 * * * postgres /pg/bin/pg-backup full' ]  # daily full backup at 01:00 AM
Then, apply the Crontab configuration to the nodes with the following command:
./node.yml -t node_crontab
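You can also take a backup ad hoc with the same wrapper script the crontab calls, run as the postgres dbsu:

sudo -iu postgres /pg/bin/pg-backup full   # take a full backup right now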
For more about backup strategies, please refer to Backup Strategy.
True High Availability
The default single-node deployment (with external S3) provides a minimal disaster recovery guarantee, with RTO in hours and RPO in MBs.
To achieve RTO < 30s and zero data loss, you need a multi-node high availability cluster with at least 3 nodes.
This involves high availability for these components:
- ETCD: DCS requires at least three nodes to tolerate one node failure.
- PGSQL: PGSQL synchronous commit mode recommends at least three nodes.
- INFRA: It's good to have two or three copies of the observability stack.
- Supabase itself can also have multiple replicas to achieve high availability.
We recommend referring to the trio and safe config templates to upgrade your cluster to three or more nodes.
In this case, you also need to modify the access points for PostgreSQL and MinIO to use the DNS / L2 VIP / HAProxy HA access points.
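Once scaled out, you can check cluster membership and failover readiness with Patroni's CLI (a sketch; the config path follows Pigsty's convention and may differ in your setup):

sudo -iu postgres patronictl -c /pg/bin/patroni.yml list   # show the primary, replicas, and replication lag

With three etcd/PGSQL nodes in place, killing the primary should yield a new leader within seconds, which you can observe with the same command.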