Configuration

Choose appropriate storage backends and configure JuiceFS filesystem instances

Concepts

JuiceFS is a high-performance POSIX filesystem composed of a metadata engine and data storage. In Pigsty, PostgreSQL serves as the metadata engine: filesystem metadata (directory structure, file attributes, etc.) is stored in PostgreSQL, leveraging its high availability and backup/recovery capabilities.

Core features of the JUICE module:

  • Multi-instance support: Each node can mount multiple JuiceFS filesystem instances
  • PostgreSQL metadata: Leverages PostgreSQL's reliability and PITR capabilities
  • Monitoring integration: Each instance exposes its own Prometheus metrics port
  • Flexible storage backends: Supports PostgreSQL large objects, MinIO, S3, and more

Identity Parameters

JUICE module configuration is defined via the juice_instances dictionary. Each instance requires:

| Name            | Type                     | Description                  | Example                  |
|-----------------|--------------------------|------------------------------|--------------------------|
| juice_instances | Required, Node-level     | JuiceFS instance definitions | { jfs: {...} }           |
| path            | Required, Instance-level | Mount point path             | /fs                      |
| meta            | Required, Instance-level | Metadata engine URL          | postgres://u:p@h:5432/db |
  • juice_instances: A dictionary whose keys are filesystem names (instance IDs) and whose values are instance configs
  • path: Filesystem mount point path, e.g., /fs, /pgfs, /data/shared
  • meta: PostgreSQL metadata engine connection URL
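Put together, a minimal instance definition looks like this (a sketch: the filesystem name jfs and the connection details are placeholders):

```yaml
juice_instances:                       # node-level dictionary: name -> instance config
  jfs:                                 # filesystem name (instance ID)
    path: /fs                          # mount point path
    meta: postgres://u:p@h:5432/db     # metadata engine URL (placeholder credentials)
```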

Instance Configuration

Each JuiceFS instance supports these config options:

| Field | Required | Default        | Description                                       |
|-------|----------|----------------|---------------------------------------------------|
| path  | Yes      | -              | Mount point path                                  |
| meta  | Yes      | -              | Metadata engine URL                               |
| data  | No       | ''             | Storage backend options for juicefs format        |
| unit  | No       | juicefs-<name> | systemd service name                              |
| mount | No       | ''             | Extra mount options                               |
| port  | No       | 9567           | Prometheus metrics port (must be unique per node) |
| owner | No       | root           | Mount point directory owner                       |
| group | No       | root           | Mount point directory group                       |
| mode  | No       | 0755           | Mount point directory permissions                 |
| state | No       | create         | create to create, absent to remove                |
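To illustrate the optional fields, here is a sketch that overrides several defaults from the table above (the owner/group/mode values are hypothetical choices, not recommendations):

```yaml
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:[email protected]:5432/meta
    unit : juicefs-jfs      # systemd service name (matches the default juicefs-<name>)
    port : 9567             # Prometheus metrics port, unique per node
    owner: postgres         # mount point directory owner (hypothetical)
    group: postgres         # mount point directory group (hypothetical)
    mode : '0750'           # mount point directory permissions (hypothetical)
    state: create           # create to create, absent to remove
```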

Storage Backends

JuiceFS supports multiple data storage backends; the data field holds the options passed to the juicefs format command:

PostgreSQL Large Object Storage

Use PostgreSQL as data storage backend, storing file data as large objects:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567

This mode unifies data and metadata management, enabling filesystem-level PITR via PostgreSQL backup and recovery.

MinIO Object Storage

Use MinIO as data storage backend:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage minio --bucket http://10.10.10.10:9000/juice --access-key minioadmin --secret-key minioadmin
    port  : 9567

S3-Compatible Storage

Use AWS S3 or S3-compatible object storage:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage s3 --bucket https://s3.amazonaws.com/my-bucket --access-key AKIAXXXXXXXX --secret-key XXXXXXXXXX
    port  : 9567

Configuration Examples

Single Instance

The simplest single-instance config, using PostgreSQL for both metadata and data:

all:
  children:
    infra:
      hosts:
        10.10.10.10:
          juice_instances:
            jfs:
              path  : /fs
              meta  : postgres://dbuser_meta:[email protected]:5432/meta
              data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta

Multi-Instance

Multiple filesystems on the same node; each instance must use a unique metrics port:

all:
  children:
    infra:
      hosts:
        10.10.10.10:
          juice_instances:
            pgfs:
              path  : /pgfs
              meta  : postgres://dbuser_meta:[email protected]:5432/meta
              data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
              port  : 9567
            shared:
              path  : /data/shared
              meta  : postgres://dbuser_meta:[email protected]:5432/shared_meta
              data  : --storage minio --bucket http://10.10.10.10:9000/shared
              port  : 9568    # Must differ from other instances
              owner : postgres
              group : postgres

Multi-Node Shared Filesystem

Multiple nodes mounting the same JuiceFS filesystem for shared storage:

all:
  children:
    app:
      hosts:
        10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
        10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
        10.10.10.13: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }

AI/Coding Sandbox

A complete config for an AI-assisted coding sandbox with Code-Server, JupyterLab, and JuiceFS:

all:
  children:
    infra:
      hosts:
        10.10.10.10:
          code_enabled: true
          code_password: 'Code.Server'
          jupyter_enabled: true
          jupyter_password: 'Jupyter.Lab'
          juice_instances:
            jfs:
              path  : /fs
              meta  : postgres://dbuser_meta:[email protected]:5432/meta
              data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
      vars:
        code_home: /fs/code
        jupyter_home: /fs/jupyter

Limitations

  • Each JuiceFS instance's Prometheus metrics port must be unique per node
  • When using PostgreSQL as the data storage backend, file data is stored as large objects, which may not suit very large files
  • Filesystem formatting (juicefs format) is a one-time operation; changing the storage backend requires reformatting
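Per the state field in the instance configuration table, an existing instance can be removed by setting state to absent. A sketch (whether the formatted backend data is reclaimed is a separate concern not covered here):

```yaml
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:[email protected]:5432/meta
    state: absent    # remove this instance instead of creating it
```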

Last Modified 2026-01-25: v4.0 batch update (65761a0)