Module: JUICE

JuiceFS distributed filesystem using PostgreSQL as metadata engine, with PITR-capable shared storage.

JuiceFS is a high-performance POSIX-compatible distributed filesystem that can use PostgreSQL as its metadata engine.

Pigsty’s JUICE module provides complete JuiceFS deployment and management, with multi-instance support, automated installation, monitoring integration, and filesystem PITR via PostgreSQL backup recovery.


Features

  • PostgreSQL Metadata Engine: Uses PostgreSQL for filesystem metadata storage, with HA and PITR capability
  • Flexible Data Storage: Supports PostgreSQL Large Object, MinIO, S3 and other storage backends
  • Multi-Instance Support: Single node can mount multiple independent JuiceFS filesystems
  • PITR Capability: Leverage PostgreSQL backup recovery for filesystem point-in-time recovery
  • Monitoring Integration: Auto-integrated with VictoriaMetrics monitoring system

Config Example

Typical JuiceFS configuration using PostgreSQL as metadata and data storage:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567

Use Cases

JUICE module is suitable for:

  • AI Coding Sandbox: Persistent storage for Code-Server, JupyterLab
  • Shared Storage: Multiple nodes mount the same filesystem for file sharing
  • Data Lake Storage: Large capacity storage for data analysis, ML tasks
  • Backup Archive: Low-cost data archiving using object storage backends

Documentation

  • Configuration: Configure JuiceFS instances, storage backends and mount options
  • Parameters: JUICE module parameter reference
  • Playbook: Deployment and management playbook guide
  • Administration: JuiceFS management SOPs, including scaling, PITR recovery
  • Monitoring: JuiceFS metrics and Grafana dashboards
  • FAQ: Common questions about JUICE module

1 - Configuration

Choose appropriate storage backends and configure JuiceFS filesystem instances

Concepts

JuiceFS is a high-performance POSIX filesystem composed of a metadata engine and data storage. In Pigsty, we use PostgreSQL as the metadata engine, storing filesystem metadata (directory structure, file attributes, etc.) in PostgreSQL, leveraging its HA and backup recovery capabilities.

JUICE module core features:

  • Multi-instance support: Each node can mount multiple JuiceFS filesystem instances
  • PostgreSQL metadata: Leverages PostgreSQL reliability and PITR capability
  • Monitoring integration: Each instance exposes Prometheus metrics port
  • Flexible storage backends: Supports PostgreSQL large objects, MinIO, S3, and more

Identity Parameters

JUICE module configuration is defined via the juice_instances dictionary. Each instance requires:

| Name | Attribute | Description | Example |
|------|-----------|-------------|---------|
| juice_instances | Required, Node-level | JuiceFS instance definitions | { jfs: {...} } |
| path | Required, Instance-level | Mount point path | /fs |
| meta | Required, Instance-level | Metadata engine URL | postgres://u:p@h:5432/db |
  • juice_instances: A dictionary whose keys are filesystem names (instance IDs) and whose values are instance configs
  • path: Filesystem mount point path, e.g., /fs, /pgfs, /data/shared
  • meta: PostgreSQL metadata engine connection URL

Instance Configuration

Each JuiceFS instance supports these config options:

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| path | Yes | - | Mount point path |
| meta | Yes | - | Metadata engine URL |
| data | No | '' | juicefs format storage backend options |
| unit | No | juicefs-<name> | systemd service name |
| mount | No | '' | Extra mount options |
| port | No | 9567 | Prometheus metrics port (must be unique per node) |
| owner | No | root | Mount point directory owner |
| group | No | root | Mount point directory group |
| mode | No | 0755 | Mount point directory permissions |
| state | No | create | create to create, absent to remove |

Storage Backends

JuiceFS supports multiple data storage backends, configured via the data field for juicefs format command:

PostgreSQL Large Object Storage

Use PostgreSQL as data storage backend, storing file data as large objects:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567

This mode unifies data and metadata management, enabling filesystem PITR via PostgreSQL backup recovery.

MinIO Object Storage

Use MinIO as data storage backend:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage minio --bucket http://10.10.10.10:9000/juice --access-key minioadmin --secret-key minioadmin
    port  : 9567

S3-Compatible Storage

Use AWS S3 or S3-compatible object storage:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage s3 --bucket https://s3.amazonaws.com/my-bucket --access-key AKIAXXXXXXXX --secret-key XXXXXXXXXX
    port  : 9567
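
The data field is an opaque option string handed to the juicefs format command. As an illustration only (the parse_data_opts helper below is hypothetical, not part of Pigsty or JuiceFS), such a string can be split into flags for inspection:

```python
import shlex

def parse_data_opts(data: str) -> dict:
    """Split a `data` option string into a {flag: value} dict (sketch)."""
    tokens = shlex.split(data)
    opts, i = {}, 0
    while i < len(tokens):
        if tokens[i].startswith("--"):
            key = tokens[i][2:]
            # A flag followed by another flag (or nothing) is boolean
            if i + 1 < len(tokens) and not tokens[i + 1].startswith("--"):
                opts[key] = tokens[i + 1]
                i += 2
            else:
                opts[key] = True
                i += 1
        else:
            i += 1
    return opts

opts = parse_data_opts(
    "--storage s3 --bucket https://s3.amazonaws.com/my-bucket "
    "--access-key AKIAXXXXXXXX --secret-key XXXXXXXXXX"
)
print(opts["storage"])   # s3
```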

Configuration Examples

Single Instance

Simplest single-instance config using PostgreSQL for metadata and data:

all:
  children:
    infra:
      hosts:
        10.10.10.10:
          juice_instances:
            jfs:
              path  : /fs
              meta  : postgres://dbuser_meta:[email protected]:5432/meta
              data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta

Multi-Instance

Multiple filesystems on the same node; note that each needs a unique port:

all:
  children:
    infra:
      hosts:
        10.10.10.10:
          juice_instances:
            pgfs:
              path  : /pgfs
              meta  : postgres://dbuser_meta:[email protected]:5432/meta
              data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
              port  : 9567
            shared:
              path  : /data/shared
              meta  : postgres://dbuser_meta:[email protected]:5432/shared_meta
              data  : --storage minio --bucket http://10.10.10.10:9000/shared
              port  : 9568    # Must differ from other instances
              owner : postgres
              group : postgres

Multi-Node Shared Filesystem

Multiple nodes mounting the same JuiceFS filesystem for shared storage:

all:
  children:
    app:
      hosts:
        10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
        10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
        10.10.10.13: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }

AI/Coding Sandbox

Complete config for AI-assisted coding with Code-Server, JupyterLab, and JuiceFS:

all:
  children:
    infra:
      hosts:
        10.10.10.10:
          code_enabled: true
          code_password: 'Code.Server'
          jupyter_enabled: true
          jupyter_password: 'Jupyter.Lab'
          juice_instances:
            jfs:
              path  : /fs
              meta  : postgres://dbuser_meta:[email protected]:5432/meta
              data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
      vars:
        code_home: /fs/code
        jupyter_home: /fs/jupyter

Limitations

  • Each JuiceFS instance's metrics port must be unique per node for Prometheus scraping
  • When using PostgreSQL for data storage, file data is stored as large objects, which may not suit very large files
  • Filesystem formatting (juicefs format) is one-time; changing storage backend requires reformatting
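
The port-uniqueness constraint is the kind of thing the juice_id validation task catches. A minimal Python sketch of such a duplicate-port check (illustrative only, not Pigsty's actual code):

```python
from collections import Counter

def check_ports(juice_instances: dict) -> list:
    """Return metrics ports used by more than one instance (default 9567)."""
    ports = [cfg.get("port", 9567) for cfg in juice_instances.values()]
    return [p for p, n in Counter(ports).items() if n > 1]

instances = {
    "pgfs":   {"path": "/pgfs", "port": 9567},
    "shared": {"path": "/shared"},   # no port given -> defaults to 9567
}
print(check_ports(instances))   # [9567] -> both instances collide
```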

2 - Parameters

JUICE module provides 2 global parameters for JuiceFS deployment and configuration

JUICE module parameter list, 2 parameters total:


Parameter Overview

| Parameter | Type | Level | Description |
|-----------|------|-------|-------------|
| juice_cache | path | C | JuiceFS shared cache directory |
| juice_instances | dict | I | JuiceFS instance definitions, required |

Default Parameters

JUICE: 2 parameters, defined in roles/juice/defaults/main.yml

#-----------------------------------------------------------------
# JUICE
#-----------------------------------------------------------------
juice_cache: /data/juice              # JuiceFS shared cache directory
juice_instances: {}                   # JuiceFS instance definitions

JUICE

This section contains parameters for the juice role, used by the juice.yml playbook.

juice_cache

Parameter: juice_cache, Type: path, Level: C

Shared local cache directory for all JuiceFS instances, defaults to /data/juice.

JuiceFS isolates cache data by filesystem UUID under this directory, accelerating frequently accessed file reads.

juice_cache: /data/juice

juice_instances

Parameter: juice_instances, Type: dict, Level: I

JuiceFS instance definition dictionary, required parameter, must be explicitly configured at Host level.

The content is a YAML/JSON dictionary: each key is a filesystem name (instance ID), and each value is an instance config object.

juice_instances:
  jfs:                                          # Filesystem name
    path  : /fs                                 # [Required] Mount point path
    meta  : postgres://u:p@h:5432/db            # [Required] Metadata engine URL
    data  : --storage postgres --bucket ...    # Storage backend options
    unit  : juicefs-jfs                         # systemd service name
    mount : ''                                  # Extra mount options
    port  : 9567                                # Metrics port (must be unique per node)
    owner : root                                # Mount point owner
    group : root                                # Mount point group
    mode  : '0755'                              # Mount point permissions
    state : create                              # create | absent

Instance config field descriptions:

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| path | Yes | - | Mount point path, e.g., /fs, /pgfs |
| meta | Yes | - | Metadata engine URL, typically PostgreSQL connection string |
| data | No | '' | juicefs format storage backend params |
| unit | No | juicefs-<name> | systemd service unit name |
| mount | No | '' | juicefs mount extra params |
| port | No | 9567 | Prometheus metrics port, must be unique for multi-instance |
| owner | No | root | Mount point directory owner |
| group | No | root | Mount point directory group |
| mode | No | 0755 | Mount point directory permissions |
| state | No | create | create to create, absent to remove |
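
Assuming the defaults listed above, a hypothetical resolve_instance helper shows how a per-instance config might be merged with them (a sketch, not Pigsty's actual implementation):

```python
def resolve_instance(name: str, cfg: dict) -> dict:
    """Apply the documented per-instance defaults; explicit values win."""
    defaults = {
        "data": "", "unit": f"juicefs-{name}", "mount": "",
        "port": 9567, "owner": "root", "group": "root",
        "mode": "0755", "state": "create",
    }
    return {**defaults, **cfg}

ins = resolve_instance("jfs", {"path": "/fs", "meta": "postgres://u:p@h:5432/db"})
print(ins["unit"], ins["port"], ins["state"])   # juicefs-jfs 9567 create
```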

Config Examples:

Using PostgreSQL for metadata and data storage:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567

Using MinIO for data storage:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage minio --bucket http://10.10.10.10:9000/juice --access-key minioadmin --secret-key minioadmin
    port  : 9567

Multi-instance config (note unique ports):

juice_instances:
  pgfs:
    path  : /pgfs
    meta  : postgres://dbuser_meta:[email protected]:5432/meta
    data  : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port  : 9567
  shared:
    path  : /shared
    meta  : postgres://dbuser_meta:[email protected]:5432/shared
    port  : 9568    # Must differ from pgfs
    owner : postgres
    group : postgres

3 - Playbooks

Use Ansible playbooks to manage JuiceFS filesystems, with a reference of common commands.

JUICE module provides one playbook for deploying and removing JuiceFS filesystem instances:


juice.yml

The juice.yml playbook for JuiceFS deployment contains these subtasks:

juice_id        : Validate config, check port conflicts
juice_install   : Install juicefs package
juice_cache     : Create shared cache directory
juice_clean     : Clean instances (state=absent)
juice_instance  : Create instances (state=create)
  - juice_init  : Format filesystem
  - juice_dir   : Create mount point directory
  - juice_config: Render config files (triggers restart)
  - juice_launch: Start systemd service
juice_register  : Register to monitoring system

Operation Levels

juice.yml supports two operation levels:

| Level | Limit Parameter | Description |
|-------|-----------------|-------------|
| Node | -l <ip> | Deploy all JuiceFS instances on specified node |
| Instance | -l <ip> -e fsname=<name> | Deploy only single instance on specified node |

Node-Level Operations

Deploy all JuiceFS instances defined on specified node:

./juice.yml -l 10.10.10.10        # Deploy all instances on this node
./juice.yml -l 10.10.10.11        # Deploy on another node

Node-level operations will:

  • Install JuiceFS package
  • Create shared cache directory
  • Format and mount all defined filesystem instances
  • Register all instances to monitoring system

Instance-Level Operations

Specify single instance via -e fsname=<name> parameter:

# Deploy only instance named jfs on 10.10.10.10
./juice.yml -l 10.10.10.10 -e fsname=jfs

# Deploy only instance named shared on 10.10.10.11
./juice.yml -l 10.10.10.11 -e fsname=shared

Instance-level operations are useful for:

  • Adding new filesystem instances to existing nodes
  • Redeploying single failed instance
  • Updating single instance configuration

Common Tags

Use -t <tag> to selectively execute tasks:

# Only install package, don't start service
./juice.yml -l 10.10.10.10 -t juice_install

# Only update config and restart instances
./juice.yml -l 10.10.10.10 -t juice_config

# Only update monitoring registration
./juice.yml -l 10.10.10.10 -t juice_register

# Remove instances (requires state: absent in config)
./juice.yml -l 10.10.10.10 -t juice_clean

Idempotency

juice.yml is idempotent and safe to run repeatedly:

  • juice_init (format) only performs actual formatting when the filesystem does not exist
  • Repeated runs overwrite existing config files
  • Config changes trigger restart of corresponding systemd services
  • Suitable for batch updates after config changes

Tip: To push config changes without a full redeploy, use -t juice_config to render config files only; only instances whose config actually changed are restarted.


Removing Instances

To remove JuiceFS instances, two steps are needed:

  1. Set instance’s state to absent in config
  2. Execute playbook’s juice_clean task
# Step 1: Modify config
juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://...
    state : absent    # Mark for removal
# Step 2: Execute removal
./juice.yml -l 10.10.10.10 -t juice_clean

# Or remove only specified instance
./juice.yml -l 10.10.10.10 -e fsname=jfs -t juice_clean

Removal operations will:

  • Stop corresponding systemd service
  • Perform a lazy unmount (umount -l)
  • Delete systemd service files
  • Delete environment config files
  • Reload systemd daemon

Note: Removal does not delete metadata and file data in PostgreSQL. For complete cleanup, manually delete the corresponding database.


Quick Reference

Deployment Commands

# Deploy all instances on node
./juice.yml -l <ip>

# Deploy single instance
./juice.yml -l <ip> -e fsname=<name>

# Update config and restart
./juice.yml -l <ip> -t juice_config

# Update only single instance config
./juice.yml -l <ip> -e fsname=<name> -t juice_config

Removal Commands

# Remove all instances marked absent on node
./juice.yml -l <ip> -t juice_clean

# Remove single instance
./juice.yml -l <ip> -e fsname=<name> -t juice_clean

Task Tag Reference

| Tag | Description |
|-----|-------------|
| juice_id | Validate config and port conflicts |
| juice_install | Install juicefs package |
| juice_cache | Create cache directory |
| juice_clean | Remove instances (state=absent) |
| juice_instance | Create instances (umbrella tag) |
| juice_init | Format filesystem |
| juice_dir | Create mount point directory |
| juice_config | Render config files |
| juice_launch | Start systemd service |
| juice_register | Register to VictoriaMetrics |

4 - Administration

JuiceFS filesystem management SOP - create, remove, expand, and troubleshoot

Common JuiceFS management task SOPs:

Basic Operations

Scaling & Maintenance

Troubleshooting

For more issues, see FAQ: JUICE.


Initialize JuiceFS

Use juice.yml playbook to initialize JuiceFS instances:

# Initialize all JuiceFS instances on node
./juice.yml -l 10.10.10.10

# Initialize specific instance
./juice.yml -l 10.10.10.10 -e fsname=jfs

Initialization flow:

  1. Install juicefs package
  2. Create shared cache directory /data/juice
  3. Execute juicefs format to format filesystem
  4. Create mount point directory and set permissions
  5. Render systemd service config
  6. Start service and wait for port ready
  7. Register to VictoriaMetrics monitoring

Remove JuiceFS

Removing JuiceFS instances requires two steps:

# Step 1: Set state to absent in config
# juice_instances:
#   jfs:
#     path: /fs
#     meta: postgres://...
#     state: absent

# Step 2: Execute removal
./juice.yml -l 10.10.10.10 -t juice_clean

# Or remove specific instance
./juice.yml -l 10.10.10.10 -e fsname=jfs -t juice_clean

Removal operations will:

  • Stop systemd service
  • Unmount filesystem (lazy umount)
  • Delete service config files
  • Reload systemd

Note: Removal does not delete data in PostgreSQL. For complete cleanup, manually handle the database.


Reconfigure JuiceFS

Partially execute playbook to reconfigure JuiceFS instances:

./juice.yml -l 10.10.10.10 -t juice_config

Config changes trigger service restart. To render config without restart, manually manage systemd service.


Use JuiceFS Client

Once mounted, JuiceFS is a standard POSIX filesystem:

# Check mount status
df -h /fs

# Check file and directory info
juicefs info /fs

# Check filesystem status and list active sessions
juicefs status postgres://dbuser_meta:[email protected]:5432/meta

Common Commands

# Warm up the cache for a path
juicefs warmup /fs/some/path

# Warm up with 4 concurrent threads
juicefs warmup -p 4 /fs/some/path

# Garbage-collect leaked objects (--delete actually removes them)
juicefs gc postgres://... --delete

# Export filesystem metadata (backup)
juicefs dump postgres://... > metadata.json

# Import metadata (restore)
juicefs load postgres://... < metadata.json

Add New Instance

Add new JuiceFS instance to node:

# Add new instance in inventory
juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://...
    port  : 9567
  newfs:                        # New instance
    path  : /newfs
    meta  : postgres://...
    port  : 9568                # Port must be unique
# Deploy new instance
./juice.yml -l 10.10.10.10 -e fsname=newfs

Multi-Node Shared Mount

JuiceFS supports multi-node mounting of same filesystem for shared storage:

# Multiple nodes configure same metadata URL
app:
  hosts:
    10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://..." } } }
    10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://..." } } }
    10.10.10.13: { juice_instances: { shared: { path: /shared, meta: "postgres://..." } } }
# Deploy to all nodes
./juice.yml -l app

Note: First-time formatting only needs to run on one node; other nodes automatically skip formatting.


PITR Filesystem Recovery

When using PostgreSQL for metadata and data storage, leverage PostgreSQL PITR to recover filesystem to any point in time:

# 1. Stop all JuiceFS services on all nodes
systemctl stop juicefs-jfs

# 2. Use pgBackRest to restore PostgreSQL to target time
pb restore --stanza=meta --type=time --target="2024-01-15 10:30:00"

# 3. Restart PostgreSQL primary
systemctl start postgresql

# 4. Restart all JuiceFS services on all nodes
systemctl start juicefs-jfs

This enables filesystem recovery to any moment within the backup time range.


Common Issue Diagnosis

Mount Failure Troubleshooting

# Check systemd service status
systemctl status juicefs-jfs

# View service logs
journalctl -u juicefs-jfs -f

# Check mount point
mountpoint /fs

# Manual mount test
juicefs mount postgres://... /fs --foreground

Connection Issue Troubleshooting

# Test metadata engine connection
psql "postgres://dbuser_meta:[email protected]:5432/meta" -c "SELECT 1"

# Check port listening
ss -tlnp | grep 9567

# Test metrics port
curl http://localhost:9567/metrics
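
The /metrics endpoint returns Prometheus text exposition format. A small Python sketch (the sample payload values below are made up) can pull a single metric out of such a response:

```python
def metric_value(text: str, name: str):
    """Return the first sample value of a metric from Prometheus text format."""
    for line in text.splitlines():
        if line.startswith(name):   # comment lines start with '#', never match
            return float(line.split()[-1])
    return None

sample = """\
# TYPE juicefs_used_space gauge
juicefs_used_space 1.073741824e+09
juicefs_blockcache_hits 4200
"""
print(metric_value(sample, "juicefs_used_space"))   # 1073741824.0
```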

Filesystem Issues

# Check filesystem status and sessions
juicefs status postgres://...

# Check filesystem consistency
juicefs fsck postgres://...

# Show details of a specific session
juicefs status postgres://... --session <session-id>

Performance Tuning

Cache Optimization

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://...
    mount : --cache-size 102400 --prefetch 3    # 100GB cache, prefetch 3 blocks

Concurrency Optimization

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://...
    mount : --max-uploads 50 --max-deletes 10   # Concurrent upload/delete count

Memory Optimization

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://...
    mount : --buffer-size 300 --open-cache 3600 # Buffer size, open file cache time

Key Metrics to Monitor

Monitor JuiceFS performance via Prometheus metrics:

  • juicefs_object_request_durations_histogram_seconds: Object storage request latency
  • juicefs_blockcache_hits/misses: Cache hit rate
  • juicefs_fuse_*: FUSE operation stats
  • juicefs_meta_ops_durations_histogram_seconds: Metadata operation latency

5 - Monitoring

JuiceFS filesystem monitoring metrics and Grafana dashboards

Each JuiceFS instance exposes Prometheus-format metrics on its configured port (default 9567).


Monitoring Architecture

JuiceFS Instance (port: 9567)
    ↓ /metrics
VictoriaMetrics (scrape)
    ↓
Grafana Dashboard

Pigsty automatically registers JuiceFS instances with VictoriaMetrics; the target file is located at:

/infra/targets/juice/<hostname>.yml

Key Metrics

Object Storage Metrics

| Metric | Type | Description |
|--------|------|-------------|
| juicefs_object_request_durations_histogram_seconds | histogram | Object storage request latency distribution |
| juicefs_object_request_data_bytes | counter | Object storage data transfer volume |
| juicefs_object_request_errors | counter | Object storage request error count |

Cache Metrics

| Metric | Type | Description |
|--------|------|-------------|
| juicefs_blockcache_hits | counter | Block cache hit count |
| juicefs_blockcache_misses | counter | Block cache miss count |
| juicefs_blockcache_writes | counter | Block cache write count |
| juicefs_blockcache_drops | counter | Block cache drop count |
| juicefs_blockcache_evictions | counter | Block cache eviction count |
| juicefs_blockcache_hit_bytes | counter | Cache hit bytes |
| juicefs_blockcache_miss_bytes | counter | Cache miss bytes |

Metadata Metrics

| Metric | Type | Description |
|--------|------|-------------|
| juicefs_meta_ops_durations_histogram_seconds | histogram | Metadata operation latency distribution |
| juicefs_transaction_durations_histogram_seconds | histogram | Transaction latency distribution |
| juicefs_transaction_restart | counter | Transaction retry count |

FUSE Operation Metrics

| Metric | Type | Description |
|--------|------|-------------|
| juicefs_fuse_ops_durations_histogram_seconds | histogram | FUSE operation latency distribution |
| juicefs_fuse_read_size_bytes | histogram | Read operation size distribution |
| juicefs_fuse_written_size_bytes | histogram | Write operation size distribution |

Filesystem Metrics

| Metric | Type | Description |
|--------|------|-------------|
| juicefs_used_space | gauge | Used space (bytes) |
| juicefs_used_inodes | gauge | Used inodes |

Common PromQL

Cache Hit Rate

rate(juicefs_blockcache_hits[5m]) /
(rate(juicefs_blockcache_hits[5m]) + rate(juicefs_blockcache_misses[5m]))
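
The same ratio can be computed by hand from two counter samples taken some interval apart; the numbers below are hypothetical:

```python
# Counter samples at the start and end of a 5-minute window (made-up values)
hits_t0, hits_t1 = 10_000, 13_000
miss_t0, miss_t1 = 2_000, 2_500

d_hits = hits_t1 - hits_t0            # 3000 hits during the window
d_miss = miss_t1 - miss_t0            # 500 misses during the window
hit_rate = d_hits / (d_hits + d_miss)
print(f"{hit_rate:.2%}")              # 85.71%
```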

Object Storage P99 Latency

histogram_quantile(0.99, rate(juicefs_object_request_durations_histogram_seconds_bucket[5m]))

Metadata Operation P99 Latency

histogram_quantile(0.99, rate(juicefs_meta_ops_durations_histogram_seconds_bucket[5m]))

Read/Write Throughput

# Read throughput
rate(juicefs_blockcache_hit_bytes[5m]) + rate(juicefs_blockcache_miss_bytes[5m])

# Write throughput
rate(juicefs_fuse_written_size_bytes_sum[5m])

Metrics Scrape Config

JuiceFS instance VictoriaMetrics target file format:

# /infra/targets/juice/<hostname>.yml
- labels: { ip: 10.10.10.10, ins: "node-jfs", cls: "jfs" }
  targets: [ 10.10.10.10:9567 ]
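
A target entry in this shape could be rendered with a few lines of Python (an illustrative sketch following the label convention above, not Pigsty's actual template):

```python
def render_target(ip: str, hostname: str, fsname: str, port: int = 9567) -> str:
    """Render one VictoriaMetrics file_sd entry for a JuiceFS instance."""
    return (
        f'- labels: {{ ip: {ip}, ins: "{hostname}-{fsname}", cls: "{fsname}" }}\n'
        f"  targets: [ {ip}:{port} ]\n"
    )

print(render_target("10.10.10.10", "node", "jfs"))
```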

To manually re-register:

./juice.yml -l <ip> -t juice_register

6 - FAQ

Frequently asked questions about the JUICE module

Port conflict - what to do?

Multiple JuiceFS instances on the same node must configure different port values. If you encounter port conflict error:

juice_instances have port conflicts: [9567, 9567]

Assign unique ports to each instance in config:

juice_instances:
  fs1:
    path: /fs1
    meta: postgres://...
    port: 9567
  fs2:
    path: /fs2
    meta: postgres://...
    port: 9568    # Must be different

How to add new instance?

  1. Add new instance definition in config
  2. Execute playbook specifying new instance name
./juice.yml -l 10.10.10.10 -e fsname=newfs

How to remove instance?

  1. Set instance’s state to absent in config
  2. Execute juice_clean task
./juice.yml -l 10.10.10.10 -e fsname=jfs -t juice_clean

Where is filesystem data stored?

Depends on data parameter config:

  • PostgreSQL Large Objects: Data stored in PostgreSQL’s pg_largeobject table
  • MinIO/S3: Data stored in specified bucket in object storage

Metadata is always stored in the PostgreSQL database specified by meta parameter.


What storage backends are supported?

JuiceFS supports multiple storage backends. Common ones in Pigsty:

  • postgres: PostgreSQL large object storage
  • minio: MinIO object storage
  • s3: AWS S3 or S3-compatible storage

See JuiceFS official docs for full list.


Can I mount same filesystem on multiple nodes?

Yes. Just configure the same meta URL on multiple nodes; JuiceFS handles concurrent access automatically.

First-time formatting only needs to run on one node; other nodes automatically skip formatting.


How to use PITR to recover filesystem?

When using PostgreSQL for metadata and data storage:

  1. Stop all JuiceFS services
  2. Use pgBackRest to restore PostgreSQL to target point in time
  3. Restart PostgreSQL and JuiceFS services

See Administration: PITR Filesystem Recovery for detailed steps.


Can cache directory be customized?

Yes, via juice_cache parameter:

juice_cache: /data/juice    # Default
# or
juice_cache: /ssd/juice     # Use SSD for cache

How to configure mount options?

Pass extra juicefs mount parameters via instance’s mount field:

juice_instances:
  jfs:
    path  : /fs
    meta  : postgres://...
    mount : --cache-size 102400 --prefetch 3

Common options:

| Option | Description |
|--------|-------------|
| --cache-size | Local cache size (MB) |
| --prefetch | Prefetch block count |
| --buffer-size | Read/write buffer size (MB) |
| --max-uploads | Max concurrent uploads |
| --open-cache | Open file cache time (seconds) |