Module: JUICE
JuiceFS distributed filesystem using PostgreSQL as metadata engine, with PITR-capable shared storage.
JuiceFS is a high-performance POSIX-compatible distributed filesystem that can use PostgreSQL as its metadata engine.
Pigsty’s JUICE module provides complete JuiceFS deployment and management, with multi-instance support, automated installation, monitoring integration, and filesystem PITR via PostgreSQL backup recovery.
Features
- PostgreSQL Metadata Engine: Uses PostgreSQL for filesystem metadata storage, with HA and PITR capability
- Flexible Data Storage: Supports PostgreSQL Large Object, MinIO, S3 and other storage backends
- Multi-Instance Support: Single node can mount multiple independent JuiceFS filesystems
- PITR Capability: Leverage PostgreSQL backup recovery for filesystem point-in-time recovery
- Monitoring Integration: Auto-integrated with VictoriaMetrics monitoring system
Config Example
Typical JuiceFS configuration using PostgreSQL as metadata and data storage:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port : 9567
Use Cases
JUICE module is suitable for:
- AI Coding Sandbox: Persistent storage for Code-Server, JupyterLab
- Shared Storage: Multi-node mount same filesystem for file sharing
- Data Lake Storage: Large capacity storage for data analysis, ML tasks
- Backup Archive: Low-cost data archiving using object storage backends
Documentation
- Configuration: Configure JuiceFS instances, storage backends and mount options
- Parameters: JUICE module parameter reference
- Playbook: Deployment and management playbook guide
- Administration: JuiceFS management SOPs, including scaling, PITR recovery
- Monitoring: JuiceFS metrics and Grafana dashboards
- FAQ: Common questions about JUICE module
1 - Configuration
Choose appropriate storage backends and configure JuiceFS filesystem instances
Concepts
JuiceFS is a high-performance POSIX filesystem composed of a metadata engine and data storage. In Pigsty, we use PostgreSQL as the metadata engine, storing filesystem metadata (directory structure, file attributes, etc.) in PostgreSQL, leveraging its HA and backup recovery capabilities.
JUICE module core features:
- Multi-instance support: Each node can mount multiple JuiceFS filesystem instances
- PostgreSQL metadata: Leverages PostgreSQL reliability and PITR capability
- Monitoring integration: Each instance exposes Prometheus metrics port
- Flexible storage backends: Supports PostgreSQL large objects, MinIO, S3, and more
Identity Parameters
JUICE module configuration is defined via juice_instances dictionary. Each instance requires:
| Name | Type | Description | Example |
|------|------|-------------|---------|
| juice_instances | Required, node-level | JuiceFS instance definitions | { jfs: {...} } |
| path | Required, instance-level | Mount point path | /fs |
| meta | Required, instance-level | Metadata engine URL | postgres://u:p@h:5432/db |
- juice_instances: dictionary format; the key is the filesystem name (instance ID), the value is the instance config
- path: filesystem mount point path, e.g., /fs, /pgfs, /data/shared
- meta: PostgreSQL metadata engine connection URL
Instance Configuration
Each JuiceFS instance supports these config options:
| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| path | Yes | - | Mount point path |
| meta | Yes | - | Metadata engine URL |
| data | No | '' | juicefs format storage backend options |
| unit | No | juicefs-<name> | systemd service name |
| mount | No | '' | Extra mount options |
| port | No | 9567 | Prometheus metrics port (must be unique per node) |
| owner | No | root | Mount point directory owner |
| group | No | root | Mount point directory group |
| mode | No | 0755 | Mount point directory permissions |
| state | No | create | create to create, absent to remove |
Port Conflict Detection
Multiple JuiceFS instances on the same node must use different port values. The playbook will detect port conflicts and fail before execution.
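To check a config for conflicts without deploying anything, you can run just the validation task (the juice_id task; see the task tag reference in the Playbooks section):

./juice.yml -l 10.10.10.10 -t juice_id   # validate config and port assignments only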
Storage Backends
JuiceFS supports multiple data storage backends, configured via the data field for juicefs format command:
PostgreSQL Large Object Storage
Use PostgreSQL as data storage backend, storing file data as large objects:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port : 9567
This mode unifies data and metadata management, enabling filesystem PITR via PostgreSQL backup recovery.
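Since file data lands in PostgreSQL's pg_largeobject table in this mode (see the FAQ), you can gauge how much space it occupies straight from psql; a quick check against the default meta database (may require a role with sufficient privileges):

psql postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"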
MinIO Object Storage
Use MinIO as data storage backend:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage minio --bucket http://10.10.10.10:9000/juice --access-key minioadmin --secret-key minioadmin
    port : 9567
S3-Compatible Storage
Use AWS S3 or S3-compatible object storage:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage s3 --bucket https://s3.amazonaws.com/my-bucket --access-key AKIAXXXXXXXX --secret-key XXXXXXXXXX
    port : 9567
Configuration Examples
Single Instance
Simplest single-instance config using PostgreSQL for metadata and data:
all:
  children:
    infra:
      hosts:
        10.10.10.10:
          juice_instances:
            jfs:
              path : /fs
              meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
              data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
Multi-Instance
Multiple filesystems on same node, note unique ports:
all:
  children:
    infra:
      hosts:
        10.10.10.10:
          juice_instances:
            pgfs:
              path : /pgfs
              meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
              data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
              port : 9567
            shared:
              path : /data/shared
              meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/shared_meta
              data : --storage minio --bucket http://10.10.10.10:9000/shared
              port : 9568   # Must differ from other instances
              owner: postgres
              group: postgres
Multi-Node Shared Filesystem
Multiple nodes mounting the same JuiceFS filesystem for shared storage:
all:
  children:
    app:
      hosts:
        10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
        10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
        10.10.10.13: { juice_instances: { shared: { path: /shared, meta: "postgres://...", port: 9567 } } }
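To roll this out, run the playbook against the whole group, then verify the mount on every node; a minimal check using the app group from the example above:

./juice.yml -l app                          # deploy the shared filesystem on all three nodes
ansible app -m shell -a 'df -h /shared'     # confirm the mount point on each node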
AI/Coding Sandbox
Complete config for AI-assisted coding with Code-Server, JupyterLab, and JuiceFS:
all:
  children:
    infra:
      hosts:
        10.10.10.10:
          code_enabled: true
          code_password: 'Code.Server'
          jupyter_enabled: true
          jupyter_password: 'Jupyter.Lab'
          juice_instances:
            jfs:
              path : /fs
              meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
              data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
  vars:
    code_home: /fs/code
    jupyter_home: /fs/jupyter
Limitations
- JuiceFS instance port must be unique per node for Prometheus metrics
- When using PostgreSQL for data storage, file data is stored as large objects, which may not suit very large files
- Filesystem formatting (juicefs format) is one-time; changing the storage backend requires reformatting
2 - Parameters
JUICE module provides 2 global parameters for JuiceFS deployment and configuration
JUICE module parameter list, 2 parameters total:
Parameter Overview
| Parameter | Type | Level | Description |
|-----------|------|-------|-------------|
| juice_cache | path | C | JuiceFS shared cache directory |
| juice_instances | dict | I | JuiceFS instance definitions, required |
Default Parameters
JUICE: 2 parameters, defined in roles/juice/defaults/main.yml
#-----------------------------------------------------------------
# JUICE
#-----------------------------------------------------------------
juice_cache: /data/juice # JuiceFS shared cache directory
juice_instances: {} # JuiceFS instance definitions
JUICE
This section contains parameters for the juice role,
used by the juice.yml playbook.
juice_cache
Parameter: juice_cache, Type: path, Level: C
Shared local cache directory for all JuiceFS instances, defaults to /data/juice.
JuiceFS isolates cache data by filesystem UUID under this directory, accelerating frequently accessed file reads.
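For example, with the default cache path you can inspect per-filesystem cache usage directly; a sketch (the exact on-disk layout depends on the JuiceFS version):

ls /data/juice           # one subdirectory per filesystem UUID
du -sh /data/juice/*     # cache usage per filesystem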
juice_instances
Parameter: juice_instances, Type: dict, Level: I
JuiceFS instance definition dictionary, required parameter, must be explicitly configured at Host level.
Content is JSON/YAML dictionary format, Key is filesystem name (instance ID), Value is instance config object.
juice_instances:
  jfs:                                       # Filesystem name
    path : /fs                               # [Required] Mount point path
    meta : postgres://u:p@h:5432/db          # [Required] Metadata engine URL
    data : --storage postgres --bucket ...   # Storage backend options
    unit : juicefs-jfs                       # systemd service name
    mount: ''                                # Extra mount options
    port : 9567                              # Metrics port (must be unique per node)
    owner: root                              # Mount point owner
    group: root                              # Mount point group
    mode : '0755'                            # Mount point permissions
    state: create                            # create | absent
Instance config field descriptions:
| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| path | Yes | - | Mount point path, e.g., /fs, /pgfs |
| meta | Yes | - | Metadata engine URL, typically PostgreSQL connection string |
| data | No | '' | juicefs format storage backend params |
| unit | No | juicefs-<name> | systemd service unit name |
| mount | No | '' | juicefs mount extra params |
| port | No | 9567 | Prometheus metrics port, must be unique for multi-instance |
| owner | No | root | Mount point directory owner |
| group | No | root | Mount point directory group |
| mode | No | 0755 | Mount point directory permissions |
| state | No | create | create to create, absent to remove |
Config Examples:
Using PostgreSQL for metadata and data storage:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port : 9567
Using MinIO for data storage:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage minio --bucket http://10.10.10.10:9000/juice --access-key minioadmin --secret-key minioadmin
    port : 9567
Multi-instance config (note unique ports):
juice_instances:
  pgfs:
    path : /pgfs
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
    data : --storage postgres --bucket 10.10.10.10:5432/meta --access-key dbuser_meta --secret-key DBUser.Meta
    port : 9567
  shared:
    path : /shared
    meta : postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/shared
    port : 9568   # Must differ from pgfs
    owner: postgres
    group: postgres
Port Conflict
Multiple JuiceFS instances on the same node must configure different port values, otherwise the playbook will fail during validation.
3 - Playbooks
Use ansible playbooks to manage JuiceFS filesystems, common commands reference.
JUICE module provides one playbook for deploying and removing JuiceFS filesystem instances:
juice.yml
The juice.yml playbook for JuiceFS deployment contains these subtasks:
- juice_id       : Validate config, check port conflicts
- juice_install  : Install juicefs package
- juice_cache    : Create shared cache directory
- juice_clean    : Clean instances (state=absent)
- juice_instance : Create instances (state=create)
  - juice_init   : Format filesystem
  - juice_dir    : Create mount point directory
  - juice_config : Render config files (triggers restart)
  - juice_launch : Start systemd service
- juice_register : Register to monitoring system
Operation Levels
juice.yml supports two operation levels:
| Level | Limit Parameter | Description |
|-------|-----------------|-------------|
| Node | -l <ip> | Deploy all JuiceFS instances on the specified node |
| Instance | -l <ip> -e fsname=<name> | Deploy only a single instance on the specified node |
Node-Level Operations
Deploy all JuiceFS instances defined on specified node:
./juice.yml -l 10.10.10.10 # Deploy all instances on this node
./juice.yml -l 10.10.10.11 # Deploy on another node
Node-level operations will:
- Install JuiceFS package
- Create shared cache directory
- Format and mount all defined filesystem instances
- Register all instances to monitoring system
Instance-Level Operations
Specify single instance via -e fsname=<name> parameter:
# Deploy only instance named jfs on 10.10.10.10
./juice.yml -l 10.10.10.10 -e fsname=jfs
# Deploy only instance named shared on 10.10.10.11
./juice.yml -l 10.10.10.11 -e fsname=shared
Instance-level operations are useful for:
- Adding new filesystem instances to existing nodes
- Redeploying single failed instance
- Updating single instance configuration
Use -t <tag> to selectively execute tasks:
# Only install package, don't start service
./juice.yml -l 10.10.10.10 -t juice_install
# Only update config and restart instances
./juice.yml -l 10.10.10.10 -t juice_config
# Only update monitoring registration
./juice.yml -l 10.10.10.10 -t juice_register
# Remove instances (requires state: absent in config)
./juice.yml -l 10.10.10.10 -t juice_clean
Idempotency
juice.yml is idempotent, safe to run repeatedly:
- juice_init (format) only performs the actual formatting when the filesystem doesn't exist
- Repeated runs overwrite existing config files
- Config changes trigger restart of corresponding systemd services
- Suitable for batch updates after config changes
Tip: To update configs without rerunning the full deployment, use -t juice_config to render configs only; just the instances whose config changed are restarted.
Removing Instances
To remove JuiceFS instances, two steps are needed:
1. Set the instance's state to absent in the config
2. Execute the playbook's juice_clean task
# Step 1: Modify config
juice_instances:
  jfs:
    path : /fs
    meta : postgres://...
    state: absent   # Mark for removal
# Step 2: Execute removal
./juice.yml -l 10.10.10.10 -t juice_clean
# Or remove only specified instance
./juice.yml -l 10.10.10.10 -e fsname=jfs -t juice_clean
Removal operations will:
- Stop corresponding systemd service
- Execute umount -l (lazy unmount)
- Delete systemd service files
- Delete environment config files
- Reload systemd daemon
Note: Removal does not delete metadata and file data in PostgreSQL. For complete cleanup, manually delete the corresponding database.
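A complete cleanup could then drop the backing database; a destructive sketch, assuming the meta database from the examples above and admin access to the PostgreSQL cluster:

# Run only after juice_clean has stopped and unmounted all instances
psql -h 10.10.10.10 -U postgres -c 'DROP DATABASE meta;'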
Quick Reference
Deployment Commands
# Deploy all instances on node
./juice.yml -l <ip>
# Deploy single instance
./juice.yml -l <ip> -e fsname=<name>
# Update config and restart
./juice.yml -l <ip> -t juice_config
# Update only single instance config
./juice.yml -l <ip> -e fsname=<name> -t juice_config
Removal Commands
# Remove all instances marked absent on node
./juice.yml -l <ip> -t juice_clean
# Remove single instance
./juice.yml -l <ip> -e fsname=<name> -t juice_clean
Task Tag Reference
| Tag | Description |
|-----|-------------|
| juice_id | Validate config and port conflicts |
| juice_install | Install juicefs package |
| juice_cache | Create cache directory |
| juice_clean | Remove instances (state=absent) |
| juice_instance | Create instances (umbrella tag) |
| juice_init | Format filesystem |
| juice_dir | Create mount point directory |
| juice_config | Render config files |
| juice_launch | Start systemd service |
| juice_register | Register to VictoriaMetrics |
4 - Administration
JuiceFS filesystem management SOP - create, remove, expand, and troubleshoot
Common JuiceFS management task SOPs:
- Basic Operations
- Scaling & Maintenance
- Troubleshooting
For more issues, see FAQ: JUICE.
Initialize JuiceFS
Use juice.yml playbook to initialize JuiceFS instances:
# Initialize all JuiceFS instances on node
./juice.yml -l 10.10.10.10
# Initialize specific instance
./juice.yml -l 10.10.10.10 -e fsname=jfs
Initialization flow:
- Install the juicefs package
- Create the shared cache directory /data/juice
- Execute juicefs format to format the filesystem
- Create the mount point directory and set permissions
- Render the systemd service config
- Start the service and wait for the port to be ready
- Register to VictoriaMetrics monitoring
Remove JuiceFS
Removing JuiceFS instances requires two steps:
# Step 1: Set state to absent in config
# juice_instances:
#   jfs:
#     path : /fs
#     meta : postgres://...
#     state: absent
# Step 2: Execute removal
./juice.yml -l 10.10.10.10 -t juice_clean
# Or remove specific instance
./juice.yml -l 10.10.10.10 -e fsname=jfs -t juice_clean
Removal operations will:
- Stop systemd service
- Unmount filesystem (lazy umount)
- Delete service config files
- Reload systemd
Note: Removal does not delete data in PostgreSQL. For complete cleanup, manually handle the database.
Reconfigure JuiceFS
Partially execute the playbook to reconfigure JuiceFS instances:
./juice.yml -l 10.10.10.10 -t juice_config
Config changes trigger a service restart. To render configs without restarting, manage the systemd services manually.
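For example, to apply a rendered config at a time of your choosing (the unit name follows the juicefs-<name> convention):

systemctl daemon-reload          # pick up rendered unit/env files
systemctl restart juicefs-jfs    # restart a single instance when convenient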
Use JuiceFS Client
Once mounted, JuiceFS is a standard POSIX filesystem:
# Check mount status
df -h /fs
# Real-time performance statistics of a mount point
juicefs stats /fs
# Internal info of a file or directory
juicefs info /fs
# Volume status and active sessions (takes the metadata URL)
juicefs status postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta
Common Commands
# Warm up the cache for a path
juicefs warmup /fs/some/path
# Warm up with 4 concurrent workers
juicefs warmup -p 4 /fs/some/path
# Garbage-collect leaked objects
juicefs gc postgres://... --delete
# Export filesystem metadata (backup)
juicefs dump postgres://... > metadata.json
# Import metadata (restore)
juicefs load postgres://... < metadata.json
Add New Instance
Add new JuiceFS instance to node:
# Add a new instance in the inventory
juice_instances:
  jfs:
    path : /fs
    meta : postgres://...
    port : 9567
  newfs:              # New instance
    path : /newfs
    meta : postgres://...
    port : 9568       # Port must be unique
# Deploy new instance
./juice.yml -l 10.10.10.10 -e fsname=newfs
Multi-Node Shared Mount
JuiceFS supports multi-node mounting of same filesystem for shared storage:
# Multiple nodes configure the same metadata URL
app:
  hosts:
    10.10.10.11: { juice_instances: { shared: { path: /shared, meta: "postgres://..." } } }
    10.10.10.12: { juice_instances: { shared: { path: /shared, meta: "postgres://..." } } }
    10.10.10.13: { juice_instances: { shared: { path: /shared, meta: "postgres://..." } } }
# Deploy to all nodes
./juice.yml -l app
Note: First-time formatting only needs to run on one node; other nodes automatically skip formatting.
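You can confirm from any node that the volume is already formatted before mounting; juicefs status takes the metadata URL:

juicefs status "postgres://..."   # prints volume settings and active sessions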
PITR Filesystem Recovery
When using PostgreSQL for metadata and data storage, leverage PostgreSQL PITR to recover filesystem to any point in time:
# 1. Stop all JuiceFS services on all nodes
systemctl stop juicefs-jfs
# 2. Use pgBackRest to restore PostgreSQL to target time
pb restore --stanza=meta --type=time --target="2024-01-15 10:30:00"
# 3. Restart PostgreSQL primary
systemctl start postgresql
# 4. Restart all JuiceFS services on all nodes
systemctl start juicefs-jfs
This enables filesystem recovery to any moment within the backup time range.
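After the services come back, it is prudent to verify the recovered filesystem's consistency with fsck (also shown in the troubleshooting section below):

juicefs fsck postgres://dbuser_meta:DBUser.Meta@10.10.10.10:5432/meta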
Common Issue Diagnosis
Mount Failure Troubleshooting
# Check systemd service status
systemctl status juicefs-jfs
# View service logs
journalctl -u juicefs-jfs -f
# Check mount point
mountpoint /fs
# Manual mount test (runs in the foreground by default)
juicefs mount postgres://... /fs
Connection Issue Troubleshooting
# Test metadata engine connection
psql "postgres://dbuser_meta:[email protected]:5432/meta" -c "SELECT 1"
# Check port listening
ss -tlnp | grep 9567
# Test metrics port
curl http://localhost:9567/metrics
Filesystem Issues
# Check filesystem status (takes the metadata URL)
juicefs status postgres://...
# Check filesystem consistency
juicefs fsck postgres://...
# View active sessions
juicefs status postgres://... --session
Cache Optimization
juice_instances:
  jfs:
    path : /fs
    meta : postgres://...
    mount: --cache-size 102400 --prefetch 3   # 100GB cache, prefetch 3 blocks
Concurrency Optimization
juice_instances:
  jfs:
    path : /fs
    meta : postgres://...
    mount: --max-uploads 50 --max-deletes 10   # Concurrent upload/delete count
Memory Optimization
juice_instances:
  jfs:
    path : /fs
    meta : postgres://...
    mount: --buffer-size 300 --open-cache 3600   # Buffer size, open file cache time
Key Metrics to Monitor
Monitor JuiceFS performance via Prometheus metrics:
- juicefs_object_request_durations_histogram_seconds: object storage request latency
- juicefs_blockcache_hits / juicefs_blockcache_misses: cache hit rate
- juicefs_fuse_*: FUSE operation stats
- juicefs_meta_ops_durations_histogram_seconds: metadata operation latency
5 - Monitoring
JuiceFS filesystem monitoring metrics and Grafana dashboards
Each JuiceFS instance exposes Prometheus-format metrics on configured port (default 9567).
Monitoring Architecture
JuiceFS Instance (port: 9567)
↓ /metrics
VictoriaMetrics (scrape)
↓
Grafana Dashboard
Pigsty automatically registers JuiceFS instances to VictoriaMetrics, target file located at:
/infra/targets/juice/<hostname>.yml
Key Metrics
Object Storage Metrics
| Metric | Type | Description |
|--------|------|-------------|
| juicefs_object_request_durations_histogram_seconds | histogram | Object storage request latency distribution |
| juicefs_object_request_data_bytes | counter | Object storage data transfer volume |
| juicefs_object_request_errors | counter | Object storage request error count |
Cache Metrics
| Metric | Type | Description |
|--------|------|-------------|
| juicefs_blockcache_hits | counter | Block cache hit count |
| juicefs_blockcache_misses | counter | Block cache miss count |
| juicefs_blockcache_writes | counter | Block cache write count |
| juicefs_blockcache_drops | counter | Block cache drop count |
| juicefs_blockcache_evictions | counter | Block cache eviction count |
| juicefs_blockcache_hit_bytes | counter | Cache hit bytes |
| juicefs_blockcache_miss_bytes | counter | Cache miss bytes |
Metadata Engine Metrics
| Metric | Type | Description |
|--------|------|-------------|
| juicefs_meta_ops_durations_histogram_seconds | histogram | Metadata operation latency distribution |
| juicefs_transaction_durations_histogram_seconds | histogram | Transaction latency distribution |
| juicefs_transaction_restart | counter | Transaction retry count |
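A quick way to watch for metadata contention is the transaction retry rate, using the counter from the table above:

rate(juicefs_transaction_restart[5m])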
FUSE Operation Metrics
| Metric | Type | Description |
|--------|------|-------------|
| juicefs_fuse_ops_durations_histogram_seconds | histogram | FUSE operation latency distribution |
| juicefs_fuse_read_size_bytes | histogram | Read operation size distribution |
| juicefs_fuse_written_size_bytes | histogram | Write operation size distribution |
Filesystem Metrics
| Metric | Type | Description |
|--------|------|-------------|
| juicefs_used_space | gauge | Used space (bytes) |
| juicefs_used_inodes | gauge | Used inodes |
Common PromQL
Cache Hit Rate
rate(juicefs_blockcache_hits[5m]) /
(rate(juicefs_blockcache_hits[5m]) + rate(juicefs_blockcache_misses[5m]))
Object Storage P99 Latency
histogram_quantile(0.99, rate(juicefs_object_request_durations_histogram_seconds_bucket[5m]))
Metadata Operation P99 Latency
histogram_quantile(0.99, rate(juicefs_meta_ops_durations_histogram_seconds_bucket[5m]))
Read/Write Throughput
# Read throughput
rate(juicefs_blockcache_hit_bytes[5m]) + rate(juicefs_blockcache_miss_bytes[5m])
# Write throughput
rate(juicefs_fuse_written_size_bytes_sum[5m])
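The block-cache expression approximates read traffic at the cache layer; for reads as applications see them, the FUSE read histogram's sum can be used instead (the standard _sum series of the histogram listed above):

rate(juicefs_fuse_read_size_bytes_sum[5m])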
Metrics Scrape Config
JuiceFS instance VictoriaMetrics target file format:
# /infra/targets/juice/<hostname>.yml
- labels: { ip: 10.10.10.10, ins: "node-jfs", cls: "jfs" }
  targets: [ 10.10.10.10:9567 ]
To manually re-register:
./juice.yml -l <ip> -t juice_register
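To verify the registration took effect, check the rendered target file on the infra node and probe the metrics endpoint (paths and ports from the architecture above; <hostname> is a placeholder):

cat /infra/targets/juice/<hostname>.yml
curl -s http://10.10.10.10:9567/metrics | head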
6 - FAQ
Frequently asked questions about the JUICE module
Port conflict - what to do?
Multiple JuiceFS instances on the same node must configure different port values. If you encounter port conflict error:
juice_instances have port conflicts: [9567, 9567]
Assign unique ports to each instance in config:
juice_instances:
  fs1:
    path: /fs1
    meta: postgres://...
    port: 9567
  fs2:
    path: /fs2
    meta: postgres://...
    port: 9568   # Must be different
How to add new instance?
- Add new instance definition in config
- Execute playbook specifying new instance name
./juice.yml -l 10.10.10.10 -e fsname=newfs
How to remove instance?
1. Set the instance's state to absent in the config
2. Execute the juice_clean task
./juice.yml -l 10.10.10.10 -e fsname=jfs -t juice_clean
Where is filesystem data stored?
Depends on data parameter config:
- PostgreSQL Large Objects: data is stored in PostgreSQL's pg_largeobject table
- MinIO/S3: data is stored in the specified bucket of the object storage
Metadata is always stored in the PostgreSQL database specified by meta parameter.
What storage backends are supported?
JuiceFS supports multiple storage backends. Common ones in Pigsty:
- postgres: PostgreSQL large object storage
- minio: MinIO object storage
- s3: AWS S3 or S3-compatible storage
See JuiceFS official docs for full list.
Can I mount same filesystem on multiple nodes?
Yes. Just configure the same meta URL on multiple nodes; JuiceFS handles concurrent access automatically.
First-time formatting only needs to run on one node; other nodes automatically skip formatting.
How to use PITR to recover filesystem?
When using PostgreSQL for metadata and data storage:
- Stop all JuiceFS services
- Use pgBackRest to restore PostgreSQL to target point in time
- Restart PostgreSQL and JuiceFS services
See Administration: PITR Filesystem Recovery for detailed steps.
Can cache directory be customized?
Yes, via juice_cache parameter:
juice_cache: /data/juice # Default
# or
juice_cache: /ssd/juice # Use SSD for cache
How to customize mount options?
Pass extra juicefs mount parameters via the instance's mount field:
juice_instances:
  jfs:
    path : /fs
    meta : postgres://...
    mount: --cache-size 102400 --prefetch 3
Common options:
| Option | Description |
|--------|-------------|
| --cache-size | Local cache size (MB) |
| --prefetch | Prefetch block count |
| --buffer-size | Read/write buffer size (MB) |
| --max-uploads | Max concurrent uploads |
| --open-cache | Open file cache time (seconds) |