OrioleDB is Here! 4x Performance, Zero Bloat, Decoupled Storage
OrioleDB - while the name might make you think of cookies, it’s actually named after the songbird. But whether you call it Cookie DB or Bird DB doesn’t matter - what matters is that this PG storage engine extension + kernel fork is genuinely fascinating, and it’s about to hit prime time.
I’ve been watching OrioleDB, Zheap’s successor, for quite a while. It has three major selling points: performance, operability, and cloud-native capabilities. Let me give you a quick tour of this PG kernel newcomer, along with some recent work I’ve done to help you get it up and running.
Extreme Performance: 4x Throughput
While hardware performance is overkill for most OLTP databases these days, hitting the single-node write throughput ceiling isn’t exactly rare - it’s usually what drives people to shard their databases.
OrioleDB aims to solve this. According to their homepage, they achieve 4x PostgreSQL’s read/write throughput - a pretty wild number. A 40% performance boost wouldn’t justify adopting a new storage engine, but 400%? Now that’s an interesting proposition.
Plus, OrioleDB claims to significantly reduce resource consumption in OLTP scenarios, notably lowering disk IOPS usage.
The secret sauce includes several optimizations over PG heap tables: bypassing the OS filesystem cache, linking in-memory pages directly to on-disk pages (no buffer-mapping lookups), lock-free access to in-memory pages, MVCC via UNDO logs/rollback segments instead of PG’s append-only tuple versioning, and row-level WAL that’s easier to parallelize.
Haven’t benchmarked it myself yet, but it’s tempting. Might grab a server and give it a spin soon.
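If you want to poke at the 4x claim yourself, PG 16 documents how to point pgbench at an alternative table AM via `default_table_access_method` (one of the patches listed later in this post). A minimal sketch - the database name, scale factor, and duration are placeholders:

```bash
# Initialize pgbench tables on the OrioleDB AM, then run a read-write test.
# Repeat without PGOPTIONS to get the heap-table baseline for comparison.
PGOPTIONS='-c default_table_access_method=orioledb' pgbench -i -s 1000 meta
pgbench -c 64 -j 16 -T 300 meta
```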
Zero Headaches: Simplified Ops
PostgreSQL’s most notorious pain points are XID Wraparound and table bloat - both stemming from its MVCC design.
PostgreSQL’s default storage engine was designed with “infinite time travel” in mind, using an append-only MVCC approach: a DELETE merely marks the tuple as dead, and an UPDATE marks the old version dead while appending a new one.
While this design has perks - non-blocking reads/writes, instant rollbacks regardless of transaction size, and minimal replication lag - it’s given PostgreSQL users their fair share of headaches. Even with modern hardware and automatic vacuum, a high-standard PostgreSQL setup still needs to keep an eye on bloat and garbage collection.
OrioleDB tackles this with a new storage engine - think Oracle/MySQL-style design, inheriting both their pros and cons. With its UNDO-based MVCC, OrioleDB tables say goodbye to bloat and XID wraparound concerns.
Of course, there’s no free lunch - you inherit the downsides too: large transaction issues, slower rollbacks, and analytical performance trade-offs. But it excels at what it aims for: maximum OLTP CRUD performance.
Most importantly, it’s a PG extension - an optional storage engine that plays nice with PG’s native heap tables. You can mix and match based on your needs, letting your extreme OLTP tables shine where it counts.
```sql
-- Enable the OrioleDB extension (Pigsty has it ready)
CREATE EXTENSION orioledb;

CREATE TABLE blog_post
(
    id    int8 NOT NULL,
    title text NOT NULL,
    body  text NOT NULL,
    PRIMARY KEY (id)
) USING orioledb; -- use the OrioleDB storage engine
```
Using OrioleDB is dead simple - just add the `USING orioledb` clause when creating tables.
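To confirm a table actually landed on the new engine, you can query the standard PG catalogs; and if you want new tables to default to OrioleDB, flip `default_table_access_method` - a quick sketch:

```sql
-- Check which access method blog_post uses (should return 'orioledb')
SELECT am.amname
FROM pg_class c JOIN pg_am am ON am.oid = c.relam
WHERE c.relname = 'blog_post';

-- Optionally make OrioleDB the default for new tables in this session
SET default_table_access_method = orioledb;
```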
Currently, OrioleDB is a storage engine extension requiring a patched PG kernel, as some necessary storage engine APIs haven’t landed in PG core yet. If all goes well, PostgreSQL 18 will include these patches, eliminating the need for kernel modifications.
| Status | Name | Link | Version |
|---|---|---|---|
| ✅ | Add missing inequality searches to rbtree | Link | PostgreSQL 16 |
| ✅ | Document the ability to specify TableAM for pgbench | Link | PostgreSQL 16 |
| ✅ | Remove Tuplesortstate.copytup function | Link | PostgreSQL 16 |
| ✅ | Add new Tuplesortstate.removeabbrev function | Link | PostgreSQL 16 |
| ✅ | Put abbreviation logic into puttuple_common() | Link | PostgreSQL 16 |
| ✅ | Move memory management away from writetup() and tuplesort_put*() | Link | PostgreSQL 16 |
| ✅ | Split TuplesortPublic from Tuplesortstate | Link | PostgreSQL 16 |
| ✅ | Split tuplesortvariants.c from tuplesort.c | Link | PostgreSQL 16 |
| ✅ | Fix typo in comment for writetuple() function | Link | PostgreSQL 16 |
| ✅ | Support for custom slots in the custom executor nodes | Link | PostgreSQL 16 |
| ✉️ | Allow table AM to store complex data structures in rd_amcache | Link | PostgreSQL 18 |
| ✉️ | Allow table AM tuple_insert() method to return the different slot | Link | PostgreSQL 18 |
| ✉️ | Add TupleTableSlotOps.is_current_xact_tuple() method | Link | PostgreSQL 18 |
| ✉️ | Allow locking updated tuples in tuple_update() and tuple_delete() | Link | PostgreSQL 18 |
| ✉️ | Add EvalPlanQual delete returning isolation test | Link | PostgreSQL 18 |
| ✉️ | Generalize relation analyze in table AM interface | Link | PostgreSQL 18 |
| ✉️ | Custom reloptions for table AM | Link | PostgreSQL 18 |
| ✉️ | Let table AM insertion methods control index insertion | Link | PostgreSQL 18 |
I’ve prepared `oriolepg_17` (the patched PG kernel) and `orioledb_17` (the extension) packages on EL, plus a ready-to-use config template for instant OrioleDB deployment.
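If you’d rather grab the RPMs directly, installation is a one-liner - assuming the Pigsty repo is already configured on your EL system:

```bash
# Assumes the Pigsty YUM repo is set up; package names as shipped above
dnf install -y oriolepg_17 orioledb_17
```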
Cloud-Native Storage
“Cloud-native” is an overused term that nobody quite understands. But for databases, it usually means one thing: storing data in object storage.
OrioleDB recently pivoted their slogan from “High-performance OLTP storage engine” to “Cloud-native storage engine”. I get why - Supabase acquired OrioleDB, and the sugar daddy’s needs come first.
As a “cloud database provider”, offloading cold data to “cheap” object storage instead of “premium” EBS block storage is quite profitable. Plus, it makes databases stateless “cattle” that can be freely scaled in K8s. Their motivation is crystal clear.
So I’m pretty excited that OrioleDB not only offers a new storage engine but also supports object storage. While PG-over-S3 projects exist, this is the first mature, mainline-compatible, open-source solution.
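The decoupled storage is driven by a handful of `orioledb.s3_*` GUCs. The parameter names below follow the OrioleDB beta docs as I recall them - treat this as an illustrative sketch and verify against the docs for your version. Since they are ordinary GUCs, `ALTER SYSTEM` works (restart required):

```sql
-- Illustrative only: GUC names per the OrioleDB beta docs; the endpoint,
-- region, and credentials are placeholders.
ALTER SYSTEM SET orioledb.s3_mode = true;
ALTER SYSTEM SET orioledb.s3_host = 'mybucket.s3.us-east-1.amazonaws.com';
ALTER SYSTEM SET orioledb.s3_region = 'us-east-1';
ALTER SYSTEM SET orioledb.s3_accesskey = '<access-key>';
ALTER SYSTEM SET orioledb.s3_secretkey = '<secret-key>';
-- restart PostgreSQL afterwards for these to take effect
```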
So, How Do I Try It?
OrioleDB sounds great: it solves key PG pain points, promises (future) mainline compatibility, is open-source, well-funded, and led by Alexander Korotkov, who has serious PG community cred.
Obviously, OrioleDB isn’t “production-ready” yet. I’ve watched it from Alpha1 three years ago to Beta10 now, each release making me more antsy. But I noticed it’s now in Supabase’s postgres mainline - release can’t be far off.
So when OrioleDB dropped beta10 on April 1st, I decided to package it. Fresh off building RPMs for openHalo, the MySQL-compatible PG kernel, what’s one more? I created RPM packages for the patched PG kernel (oriolepg_17) and the extension (orioledb_17), available for EL8/EL9 on x86/ARM64.
Better yet, I added native OrioleDB support to Pigsty, meaning OrioleDB gets the full PG ecosystem - Patroni for HA, pgBackRest for backups, pg_exporter for monitoring, pgbouncer for connection pooling - all wrapped up in a one-click production-grade RDS service.
This Qingming Festival, I released Pigsty v3.4.1 with built-in OrioleDB and OpenHalo kernel support. Spinning up an OrioleDB cluster is as simple as a regular PostgreSQL cluster:
```yaml
all:
  children:
    pg-orio:
      vars:
        pg_databases:
          - {name: meta ,extensions: [orioledb]}
  vars:
    pg_mode: oriole
    pg_version: 17
    pg_packages: [ orioledb, pgsql-common ]
    pg_libs: 'orioledb.so, pg_stat_statements, auto_explain'
    repo_extra_packages: [ orioledb ]
```
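From there it’s the standard Pigsty routine. The `oriole` template name below is my assumption - point `-c` at whichever config file holds the snippet above:

```bash
./configure -c oriole   # generate pigsty.yml from the OrioleDB template
./install.yml           # deploy everything, including the pg-orio cluster
```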
More Kernel Tricks
Of course, OrioleDB isn’t the only PG fork we support. You can also use:
- Microsoft SQL Server-compatible Babelfish (by AWS)
- Oracle-compatible IvorySQL (by HighGo)
- MySQL-compatible openHalo (by HaloTech)
- Aurora RAC-flavored PolarDB (by Alibaba Cloud)
- Officially certified Oracle-compatible PolarDB O 2.0
- FerretDB + Microsoft’s DocumentDB to emulate MongoDB
- One-click local Supabase (OrioleDB’s parent!) deployment using Pigsty templates
Plus, my friend Yurii, the Omnigres founder, is adding etcd protocol support to PostgreSQL. Soon, you might be able to use PG as a better-performing, more reliable etcd for Kubernetes/Patroni.
Best of all, everything’s open-source and ready to roll in Pigsty, free of charge. So if you’re curious about OrioleDB, grab a server and give it a shot - 10-minute setup, one command. Let’s see if it lives up to the hype.