Module: MINIO

Pigsty has built-in MinIO support. MinIO is an S3-compatible open-source object storage service that can serve as an optional PostgreSQL backup repository.

Min.IO: S3-Compatible Open-Source Multi-Cloud Object Storage

MinIO is an S3-compatible object storage server. It’s designed to be scalable, secure, and easy to use. It has native multi-node multi-driver HA support and can store documents, pictures, videos, and backups.

Pigsty uses MinIO as an optional PostgreSQL backup storage repo, in addition to the default local POSIX FS repo. If the MinIO repo is used, the MINIO module should be installed before any PGSQL modules. MinIO requires a trusted CA to work, so you have to install it in addition to the NODE module.

1 - Usage

Get started with MinIO and mcli: how to access the MinIO service.

After the MinIO cluster is configured and deployed with the playbook, you can start using and accessing it by following the instructions here.


Deploy Cluster

It is straightforward to deploy a single-node MinIO instance with Pigsty.

minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

Define it in the config inventory, then run the playbook:

./minio.yml -l minio

The install.yml playbook will automatically create the MinIO cluster defined in the inventory, so you don’t need to run the minio.yml playbook manually if you choose the default one-pass installation.

If you plan to deploy a production-grade, large-scale, multi-node MinIO cluster, we strongly recommend reading the Pigsty MinIO configuration document and the MinIO documentation before proceeding.
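
After the playbook completes, you can run a quick sanity check from the admin node. This is a minimal sketch, assuming the default sss alias and the default domain sss.pigsty are in place:

ansible minio -b -a 'systemctl is-active minio'   # minio service should be active on all members
mcli admin info sss                               # cluster status via the pre-configured alias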


Access Cluster

You have to access MinIO via HTTPS, so make sure the default MinIO service domain (sss.pigsty) points to the right place:

  1. You can add static resolution records in node_etc_hosts or manually modify the /etc/hosts file
  2. You can add a record to your internal DNS server if you are using a custom DNS service
  3. You can add a record in dns_records if you are using DNSMASQ on infra nodes

It is recommended to use the first method (static DNS resolution records) to avoid MinIO’s additional dependency on DNS in production environments.

You have to point the MinIO service domain to the IP address and service port of the MinIO server node, or the IP address and service port of the load balancer. Pigsty will use the default domain name sss.pigsty and default port 9000.

For example, if you are using haproxy to expose MinIO service like this, the port may be 9002.
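
To verify that the domain resolves and HTTPS works as expected, a quick check like the following may help (a sketch assuming the default sss.pigsty domain; use port 9002 only if such a load balancer service is defined):

getent hosts sss.pigsty                               # should resolve to the minio node, LB, or VIP
curl -skI https://sss.pigsty:9000/minio/health/live   # direct access, expect HTTP 200
curl -skI https://sss.pigsty:9002/minio/health/live   # through haproxy, if the 9002 service exists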


Adding Alias

To access the MinIO server cluster using the mcli client, you need to configure the server alias first:

mcli alias ls  # list minio alias (the default is sss)
mcli alias set sss https://sss.pigsty:9000 minioadmin minioadmin              # root user
mcli alias set sss https://sss.pigsty:9002 minioadmin minioadmin              # root user, on load balancer port 9002

mcli alias set pgbackrest https://sss.pigsty:9000 pgbackrest S3User.Backup    # use another user

There’s a pre-configured MinIO alias named sss for the admin user on the admin node; you can use it directly.

For the full functionality of the MinIO client tool mcli, please refer to the documentation: MinIO Client.


Manage User

You can manage biz users in MinIO using mcli. For example, you can create the two default biz users from the command line:

mcli admin user list sss     # list all users 
set +o history               # hide shell history
mcli admin user add sss dba S3User.DBA
mcli admin user add sss pgbackrest S3User.Backup
set -o history 
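
To verify a user and its attached policy, these read-only commands may help (a sketch using the default user names above):

mcli admin user info sss dba          # show status & policy of the dba user
mcli admin user info sss pgbackrest   # show status & policy of the pgbackrest user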

Manage Bucket

You can manage bucket with mcli:

mcli ls sss/                         # list all bucket on 'sss'
mcli mb --ignore-existing sss/hello  # create a bucket named 'hello'
mcli rb --force sss/hello            # delete the 'hello' bucket

Manage Object

You can perform object CRUD with mcli, for example:

mcli cp /www/pigsty/* sss/infra/     # upload local repo content to infra bucket 
mcli cp sss/infra/plugins.tgz /tmp/  # download file to local from minio
mcli ls sss/infra                    # list all files in the infra bucket
mcli rm sss/infra/plugins.tgz        # delete file in infra bucket  
mcli cat sss/infra/repo_complete     # print the content of the file

Check the Tutorial: Object Management for details.


Use rclone

The Pigsty repo ships rclone, a convenient cloud object storage client that you can use to access the MinIO service.

yum install rclone; # el compatible
apt install rclone; # debian/ubuntu

mkdir -p ~/.config/rclone/;
tee ~/.config/rclone/rclone.conf > /dev/null <<EOF
[sss]
type = s3
access_key_id = minioadmin
secret_access_key = minioadmin
endpoint = sss.pigsty:9000
EOF

rclone ls sss:/
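
Basic object operations with rclone then look like the following (a sketch; the bucket names reuse the defaults created by Pigsty):

rclone lsd sss:/                          # list buckets
rclone copy /www/pigsty/ sss:/infra/      # copy local repo content into the infra bucket
rclone copy sss:/infra/ /tmp/infra/       # download the bucket content to a local dir
rclone deletefile sss:/infra/plugins.tgz  # delete a single object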

Backup Repo

MinIO is used as an optional backup repository for pgBackRest in Pigsty. When you set pgbackrest_method to minio, the PGSQL module will automatically switch the backup repository to MinIO.

pgbackrest_method: local          # pgbackrest repo method: local,minio,[user-defined...]
pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
  local:                          # default pgbackrest repo with local posix fs
    path: /pg/backup              # local backup directory, `/pg/backup` by default
    retention_full_type: count    # retention full backups by count
    retention_full: 2             # keep 2, at most 3 full backup when using local fs repo
  minio:                          # optional minio repo for pgbackrest
    type: s3                      # minio is s3-compatible, so s3 is used
    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    s3_key: pgbackrest            # minio user access key for pgbackrest
    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    s3_uri_style: path            # use path style uri for minio rather than host style
    path: /pgbackrest             # minio backup path, default is `/pgbackrest`
    storage_port: 9000            # minio port, 9000 by default
    storage_ca_file: /pg/cert/ca.crt  # minio ca file path, `/pg/cert/ca.crt` by default
    bundle: y                     # bundle small files into a single file
    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    retention_full_type: time     # retention full backup by time on minio repo
    retention_full: 14            # keep full backup for last 14 days

Beware that if you are accessing MinIO through a load balancer, you should use the corresponding domain name and port number here.
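
For example, to switch an existing PostgreSQL cluster from the local repo to the MinIO repo, a sketch of the procedure looks like this (pg-meta is an assumed cluster name; the pgbackrest tag re-creates the pgBackRest configuration and repo):

# in the inventory: set pgbackrest_method: minio for the cluster (or globally), then:
./pgsql.yml -l pg-meta -t pgbackrest    # re-generate pgbackrest config & init the minio repo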




2 - Configuration

Configure MinIO clusters according to your needs, and access service through LB & Proxy

Configuration

You have to define a MinIO cluster in the config inventory before deploying it.

There are 3 major deployment modes for MinIO clusters:

  • SNSD: Single-Node Single-Drive
  • SNMD: Single-Node Multi-Drive
  • MNMD: Multi-Node Multi-Drive

We recommend using SNSD and MNMD for development and production deployment, respectively, and SNMD only when resources are extremely limited (only one server).

Besides, you can use multi-pool deployment to scale an existing MinIO cluster, or directly deploy multiple clusters.

When using a multi-node MinIO cluster, you can access the service from any node, so the best practice is to use a load balancer and HA access.


Core Param

There is one and only one core param for MinIO deployment: MINIO_VOLUMES, which specifies the nodes, drives, and pools of a MinIO cluster.

Pigsty will auto-generate MINIO_VOLUMES from the config inventory for you, but you can always override it directly. If not explicitly specified, Pigsty will generate it according to the following rules (see the illustration after this list):

  • SNSD: MINIO_VOLUMES points to a directory on the local node, derived from minio_data
  • SNMD: MINIO_VOLUMES points to a series of real drives on the local node, derived from minio_data
  • MNMD: MINIO_VOLUMES points to multiple nodes & multiple drives, derived from minio_data and minio_node
    • Use minio_data to specify drives on each node, such as /data{1...4}
    • Use minio_node to specify the node name pattern, such as ${minio_cluster}-${minio_seq}.pigsty
  • Multi-Pool: MINIO_VOLUMES needs to be explicitly specified
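
For illustration, the auto-generated MINIO_VOLUMES would look roughly like this for each mode (values assume the defaults used elsewhere in this document):

# SNSD : MINIO_VOLUMES="/data/minio"
# SNMD : MINIO_VOLUMES="/data{1...4}"
# MNMD : MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4}"
# Pools: MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4} https://minio-{5...8}.pigsty:9000/data{1...4}"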

Single-Node Single-Drive

Tutorial: deploy-minio-single-node-single-drive

To define a singleton MinIO instance, it’s straightforward:

# 1 Node 1 Driver (DEFAULT)
minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

The only required params are minio_seq and minio_cluster, which generate a unique identity for each MinIO instance.

Single-Node Single-Drive mode is for dev purposes, so you can use a common dir as the data dir. The default data dir for SNSD MinIO is specified by minio_data, which is /data/minio by default. Beware that in multi-drive or multi-node mode, MinIO will refuse to start if a common dir rather than a mount point is used as the data dir.

We strongly recommend using a static domain name record to access MinIO. For example, the default domain sss.pigsty defined by minio_domain can be added to all nodes through:

node_etc_hosts: ["10.10.10.10 sss.pigsty"] # domain name to access minio from all nodes (required)

Single-Node Multi-Drive

Reference: deploy-minio-single-node-multi-drive

To use multiple disks on a single node, you have to specify the minio_data in the format of {{ prefix }}{x...y}, which defines a series of disk mount points.

minio:
  hosts: { 10.10.10.10: { minio_seq: 1 } }
  vars:
    minio_cluster: minio         # minio cluster name, minio by default
    minio_data: '/data{1...4}'   # minio data dir(s), use {x...y} to specify multi drivers

This example defines a single-node MinIO cluster with 4 drives: /data1, /data2, /data3, /data4. The Vagrant MinIO sandbox pre-defines a 4-node MinIO cluster with 4 drives per node. You have to properly mount the drives before starting MinIO (be sure to format the disks with xfs):

mkfs.xfs /dev/vdb; mkdir /data1; mount -t xfs /dev/vdb /data1;
mkfs.xfs /dev/vdc; mkdir /data2; mount -t xfs /dev/vdc /data2;
mkfs.xfs /dev/vdd; mkdir /data3; mount -t xfs /dev/vdd /data3;
mkfs.xfs /dev/vde; mkdir /data4; mount -t xfs /dev/vde /data4;

Disk management is beyond this topic, just make sure your /etc/fstab is properly configured to auto-mount disks after reboot.

/dev/vdb /data1 xfs defaults,noatime,nodiratime 0 0
/dev/vdc /data2 xfs defaults,noatime,nodiratime 0 0
/dev/vdd /data3 xfs defaults,noatime,nodiratime 0 0
/dev/vde /data4 xfs defaults,noatime,nodiratime 0 0
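
A quick way to verify the mounts before launching MinIO (a minimal check, not part of the playbook):

mount -a                             # mount everything defined in /etc/fstab
df -h /data1 /data2 /data3 /data4    # each data dir should be a separate xfs mount point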

SNMD mode can utilize multiple disks on a single server to provide higher performance and capacity, and tolerate partial disk failures.

But it can do nothing with node failure, and you can’t add new nodes at runtime, so we don’t recommend using SNMD mode in production unless you have a special reason.


Multi-Node Multi-Drive

Reference: deploy-minio-multi-node-multi-drive

The extra minio_node param will be used for multi-node deployment, in addition to minio_data.

For example, this configuration defines a 4-node MinIO cluster with 4 drives per node:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }  # nodename: minio-1.pigsty
    10.10.10.11: { minio_seq: 2 }  # nodename: minio-2.pigsty
    10.10.10.12: { minio_seq: 3 }  # nodename: minio-3.pigsty
    10.10.10.13: { minio_seq: 4 }  # nodename: minio-4.pigsty
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'                         # 4-disk per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio name pattern

The minio_node param specifies the MinIO node name pattern, which is ${minio_cluster}-${minio_seq}.pigsty by default. The server name is very important for MinIO to identify and access other nodes in the cluster. It will be populated with minio_cluster and minio_seq, and written to /etc/hosts of all MinIO cluster members.

In this case, MINIO_VOLUMES will be set to https://minio-{1...4}.pigsty:9000/data{1...4} to identify the 16 disks on 4 nodes.
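
Once the cluster is configured, you can inspect the rendered value on any member node. This assumes MinIO's conventional /etc/default/minio environment file is used for the main config; adjust the path if your template differs:

grep MINIO_VOLUMES /etc/default/minio
# MINIO_VOLUMES="https://minio-{1...4}.pigsty:9000/data{1...4}"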


Multi-Pool

MinIO’s architecture allows for cluster expansion by adding new storage pools. In Pigsty, you can achieve this by explicitly setting the minio_volumes param to specify the nodes/disks for each pool.

For example, suppose you have already created a MinIO cluster as defined in the Multi-Node Multi-Disk example, and now you want to add a new storage pool consisting of four nodes.

You can specify minio_volumes here to allocate nodes for each pool to scale out the cluster.

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
    10.10.10.13: { minio_seq: 4 }
    
    10.10.10.14: { minio_seq: 5 }
    10.10.10.15: { minio_seq: 6 }
    10.10.10.16: { minio_seq: 7 }
    10.10.10.17: { minio_seq: 8 }
  vars:
    minio_cluster: minio
    minio_data: "/data{1...4}"
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    minio_volumes: 'https://minio-{1...4}.pigsty:9000/data{1...4} https://minio-{5...8}.pigsty:9000/data{1...4}'

Here, the two space-separated parameters represent two storage pools, each with four nodes and four disks per node.

For more information on storage pools, please refer to Management Plan: MinIO Cluster Expansion.


Multiple Clusters

You can deploy new MinIO nodes as a completely new MinIO cluster by defining a new group with a different cluster name.

The following configuration declares two independent MinIO clusters:

minio1:
  hosts:
    10.10.10.10: { minio_seq: 1 }
    10.10.10.11: { minio_seq: 2 }
    10.10.10.12: { minio_seq: 3 }
    10.10.10.13: { minio_seq: 4 }
  vars:
    minio_cluster: minio1
    minio_data: "/data{1...4}"

minio2:
  hosts:    
    10.10.10.14: { minio_seq: 5 }
    10.10.10.15: { minio_seq: 6 }
    10.10.10.16: { minio_seq: 7 }
    10.10.10.17: { minio_seq: 8 }
  vars:
    minio_cluster: minio2
    minio_data: "/data{1...4}"
    minio_alias: sss2
    minio_domain: sss2.pigsty
    minio_endpoint: sss2.pigsty:9000

Please note that by default, Pigsty allows only one MinIO cluster per deployment. If you need to deploy multiple MinIO clusters, some parameters with default values need to be explicitly set and cannot be omitted to avoid naming conflicts, as shown above.


Expose Service

MinIO will serve on port 9000 by default. If a multi-node MinIO cluster is deployed, you can access its service via any node. It would be better to expose MinIO service via a load balancer, such as the default haproxy on NODE, or use the L2 vip.

To expose MinIO service with haproxy, you have to define an extra service with haproxy_services:

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
    10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
    10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
  vars:
    minio_cluster: minio
    node_cluster: minio
    minio_data: '/data{1...2}'         # use two disk per node
    minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
    haproxy_services:                  # EXPOSING MINIO SERVICE WITH HAPROXY
      - name: minio                    # [REQUIRED] service name, unique
        port: 9002                     # [REQUIRED] service port, unique
        options:                       # [OPTIONAL] minio health check
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

MinIO uses port 9000 by default. A multi-node MinIO cluster can be accessed by connecting to any one of its nodes.

Service access falls under the scope of the NODE module, and we’ll provide only a basic introduction here.

High-availability access to a multi-node MinIO cluster can be achieved using an L2 VIP or HAProxy. For example, you can use Keepalived to bind an L2 VIP to the MinIO cluster, or use the haproxy component provided by the NODE module to expose MinIO services through a load balancer.

# minio cluster with 4 nodes and 4 drivers per node
minio:
  hosts:
    10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
    10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
    10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
    10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'
    minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]
    minio_users:
      - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: S3User.SomeNewPassWord , policy: readwrite }

    # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
    node_cluster: minio
    vip_enabled: true
    vip_vrid: 128
    vip_address: 10.10.10.9
    vip_interface: eth1

    # expose minio service with haproxy on all nodes
    haproxy_services:
      - name: minio                    # [REQUIRED] service name, unique
        port: 9002                     # [REQUIRED] service port, unique
        balance: leastconn             # [OPTIONAL] load balancer algorithm
        options:                       # [OPTIONAL] minio health check
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

In the configuration above, HAProxy is enabled on all nodes of the MinIO cluster, exposing MinIO services on port 9002, and a Layer 2 VIP is bound to the cluster. When in use, users should point the sss.pigsty domain name to the VIP address 10.10.10.9 and access MinIO services using port 9002. This ensures high availability, as the VIP will automatically switch to another node if any node fails.

In this scenario, you may also need to globally modify the destination of domain name resolution and adjust the minio_endpoint parameter to change the endpoint address corresponding to the MinIO alias on the management node:

minio_endpoint: https://sss.pigsty:9002   # Override the default https://sss.pigsty:9000
node_etc_hosts: ["10.10.10.9 sss.pigsty"] # Other nodes will use the sss.pigsty domain

Dedicated Proxies

Pigsty allows using a dedicated load balancer cluster, instead of the MinIO node cluster itself, to run the VIP & HAProxy.

For example, the prod template uses this approach.

proxy:
  hosts:
    10.10.10.18 : { nodename: proxy1 ,node_cluster: proxy ,vip_interface: eth1 ,vip_role: master }
    10.10.10.19 : { nodename: proxy2 ,node_cluster: proxy ,vip_interface: eth1 ,vip_role: backup }
  vars:
    vip_enabled: true
    vip_address: 10.10.10.20
    vip_vrid: 20
    
    haproxy_services:      # expose minio service : sss.pigsty:9000
      - name: minio        # [REQUIRED] service name, unique
        port: 9000         # [REQUIRED] service port, unique
        balance: leastconn # Use leastconn algorithm and minio health check
        options: [ "option httpchk", "option http-keep-alive", "http-check send meth OPTIONS uri /minio/health/live", "http-check expect status 200" ]
        servers:           # reload service with ./node.yml -t haproxy_config,haproxy_reload
          - { name: minio-1 ,ip: 10.10.10.21 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.22 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.23 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-4 ,ip: 10.10.10.24 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-5 ,ip: 10.10.10.25 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

In this case, you need to manually configure DNS resolution to point sss.pigsty to the VIP address of the dedicated proxy cluster:

minio_endpoint: https://sss.pigsty:9002    # overwrite the defaults: https://sss.pigsty:9000
node_etc_hosts: ["10.10.10.20 sss.pigsty"] # domain name to access minio from all nodes (required)

Access Service

To use the exposed service, you have to update/append the MinIO credential in the pgbackrest_repo section:

# This is the newly added HA MinIO Repo definition, USE THIS INSTEAD!
minio_ha:
  type: s3
  s3_endpoint: minio-1.pigsty   # s3_endpoint could be any load balancer: 10.10.10.1{0,1,2}, or a domain name pointing to any of the 3 nodes
  s3_region: us-east-1          # you could use the external domain name sss.pigsty, which resolves to any member (`minio_domain`)
  s3_bucket: pgsql              # instance & nodename can be used: minio-1.pigsty, minio-2.pigsty, minio-3.pigsty, minio-1, minio-2, minio-3
  s3_key: pgbackrest            # Better using a new password for MinIO pgbackrest user
  s3_key_secret: S3User.SomeNewPassWord
  s3_uri_style: path
  path: /pgbackrest
  storage_port: 9002            # Use the load balancer port 9002 instead of default 9000 (direct access)
  storage_ca_file: /etc/pki/ca.crt
  bundle: y
  cipher_type: aes-256-cbc      # Better using a new cipher password for your production environment
  cipher_pass: pgBackRest.With.Some.Extra.PassWord.And.Salt.${pg_cluster}
  retention_full_type: time
  retention_full: 14

Expose Console

MinIO has a built-in console that can be accessed via HTTPS @ minio_admin_port. If you want to expose the MinIO console to the outside world, you can add MinIO to infra_portal.

# ./infra.yml -t nginx
infra_portal:
  home         : { domain: h.pigsty }
  grafana      : { domain: g.pigsty ,endpoint: "${admin_ip}:3000" , websocket: true }
  prometheus   : { domain: p.pigsty ,endpoint: "${admin_ip}:9090" }
  alertmanager : { domain: a.pigsty ,endpoint: "${admin_ip}:9093" }
  blackbox     : { endpoint: "${admin_ip}:9115" }
  loki         : { endpoint: "${admin_ip}:3100" }

  # MinIO console require HTTPS / Websocket to work
  minio        : { domain: m.pigsty     ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
  minio10      : { domain: m10.pigsty   ,endpoint: "10.10.10.10:9001" ,scheme: https ,websocket: true }
  minio11      : { domain: m11.pigsty   ,endpoint: "10.10.10.11:9001" ,scheme: https ,websocket: true }
  minio12      : { domain: m12.pigsty   ,endpoint: "10.10.10.12:9001" ,scheme: https ,websocket: true }
  minio13      : { domain: m13.pigsty   ,endpoint: "10.10.10.13:9001" ,scheme: https ,websocket: true }

Beware that MinIO console should be accessed via HTTPS, please DO NOT expose MinIO console without encryption in production.

This means you usually need to add an m.pigsty resolution record to your DNS server, or to /etc/hosts on your local host, to access the MinIO console.

Meanwhile, if you are using Pigsty’s self-signed CA rather than a regular public CA, you usually need to manually trust the CA or certificate to skip the “insecure” warning in the browser.
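
For example, to trust the Pigsty self-signed CA on a client host, you can copy the CA file distributed to managed nodes (/etc/pki/ca.crt) into the system trust store. This is a generic sketch, not a Pigsty playbook step:

# EL (RHEL/Rocky/Alma) based systems
sudo cp /etc/pki/ca.crt /etc/pki/ca-trust/source/anchors/pigsty-ca.crt
sudo update-ca-trust

# Debian / Ubuntu based systems
sudo cp /etc/pki/ca.crt /usr/local/share/ca-certificates/pigsty-ca.crt
sudo update-ca-certificates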




3 - Parameters

MinIO has 17 parameters to customize the cluster as needed.

MinIO is an S3-compatible object storage service, which is used as an optional central backup storage repo for PostgreSQL.

You may also use it for other purposes, such as storing large files, documents, pictures & videos.


Parameters

There are 17 related parameters in the MINIO module:

Parameter Type Level Comment
minio_seq int I minio instance identifier, REQUIRED
minio_cluster string C minio cluster name, minio by default
minio_clean bool G/C/A cleanup minio during init?, false by default
minio_user username C minio os user, minio by default
minio_node string C minio node name pattern
minio_data path C minio data dir(s), use {x…y} to specify multi drivers
minio_volumes string C minio core parameter, specify nodes and disks, auto-gen by default
minio_domain string G minio external domain name, sss.pigsty by default
minio_port port C minio service port, 9000 by default
minio_admin_port port C minio console port, 9001 by default
minio_access_key username C root access key, minioadmin by default
minio_secret_key password C root secret key, minioadmin by default
minio_extra_vars string C extra environment variables for minio server
minio_alias string G alias name for local minio deployment
minio_endpoint string C corresponding host:port for above minio alias
minio_buckets bucket[] C list of minio bucket to be created
minio_users user[] C list of minio user to be created

The minio_volumes and minio_endpoint are auto-generated parameters, but you can explicitly override these two parameters.


Defaults

The default parameters of MinIO are defined in roles/minio/defaults/main.yml

#-----------------------------------------------------------------
# MINIO
#-----------------------------------------------------------------
#minio_seq: 1                     # minio instance identifier, REQUIRED
minio_cluster: minio              # minio cluster identifier, REQUIRED
minio_clean: false                # cleanup minio during init?, false by default
minio_user: minio                 # minio os user, `minio` by default
minio_node: '${minio_cluster}-${minio_seq}.pigsty' # minio node name pattern
minio_data: '/data/minio'         # minio data dir(s), use {x...y} to specify multi drivers
#minio_volumes:                   # minio data volumes, override defaults if specified
minio_domain: sss.pigsty          # minio external domain name, `sss.pigsty` by default
minio_port: 9000                  # minio service port, 9000 by default
minio_admin_port: 9001            # minio console port, 9001 by default
minio_access_key: minioadmin      # root access key, `minioadmin` by default
minio_secret_key: minioadmin      # root secret key, `minioadmin` by default
minio_extra_vars: ''              # extra environment variables
minio_alias: sss                  # alias name for local minio deployment
#minio_endpoint: https://sss.pigsty:9000 # if not specified, overwritten by defaults
minio_buckets: [ { name: pgsql }, { name: infra },  { name: redis } ]
minio_users:
  - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
  - { access_key: pgbackrest , secret_key: S3User.Backup, policy: readwrite }

minio_seq

name: minio_seq, type: int, level: I

MinIO instance identifier, a REQUIRED identity parameter. There is no default value; you have to assign it manually.


minio_cluster

name: minio_cluster, type: string, level: C

minio cluster name, minio by default. This is useful when deploying multiple MinIO clusters


minio_clean

name: minio_clean, type: bool, level: G/C/A

cleanup minio during init?, false by default


minio_user

name: minio_user, type: username, level: C

minio os user name, minio by default


minio_node

name: minio_node, type: string, level: C

minio node name pattern, this is used for multi-node deployment

default values: ${minio_cluster}-${minio_seq}.pigsty


minio_data

name: minio_data, type: path, level: C

minio data dir(s)

default value: /data/minio, which is a common dir for single-node deployment.

For a multi-drive deployment, you can use the {x...y} notation to specify multiple drives.


minio_volumes

name: minio_volumes, type: string, level: C

The one and only core parameter of MinIO. If not specified, it will be auto-generated by the following rule:

minio_volumes: "{% if minio_cluster_size|int > 1 %}https://{{ minio_node|replace('${minio_cluster}', minio_cluster)|replace('${minio_seq}',minio_seq_range) }}:{{ minio_port|default(9000) }}{% endif %}{{ minio_data }}"
  • In case of SNSD or SNMD deployment, minio_volumes directly uses the value of minio_data
  • In case of MNMD deployment, minio_volumes combines minio_node, minio_port, and minio_data to generate this param
  • In case of multiple storage pools, you have to override minio_volumes to specify multiple node pools explicitly

It is the user’s responsibility to make sure the parameters used in minio_volumes are consistent with minio_node, minio_port, and minio_data.


minio_domain

name: minio_domain, type: string, level: G

minio service domain name, sss.pigsty by default.

The client can access minio S3 service via this domain name. This name will be registered to local DNSMASQ and included in SSL certs.


minio_port

name: minio_port, type: port, level: C

minio service port, 9000 by default


minio_admin_port

name: minio_admin_port, type: port, level: C

minio console port, 9001 by default


minio_access_key

name: minio_access_key, type: username, level: C

root access key, minioadmin by default


minio_secret_key

name: minio_secret_key, type: password, level: C

root secret key, minioadmin by default

default values: minioadmin

PLEASE CHANGE THIS IN YOUR DEPLOYMENT


minio_extra_vars

name: minio_extra_vars, type: string, level: C

extra environment variables for minio server. Check Minio Server for the complete list.

The default value is an empty string; you can use a multiline string to pass multiple environment variables.
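
For example, a multiline value could pass several MinIO environment variables at once. The variable names below are standard MinIO server settings, used here purely for illustration:

minio_extra_vars: |
  MINIO_BROWSER_REDIRECT_URL=https://m.pigsty
  MINIO_STORAGE_CLASS_STANDARD=EC:4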


minio_alias

name: minio_alias, type: string, level: G

MinIO alias name for the local MinIO cluster

default values: sss, which will be written to infra nodes’ / admin users’ client alias profile.


minio_endpoint

name: minio_endpoint, type: string, level: C

The corresponding host:port for the above MinIO alias. This parameter is not defined by default.

If not defined, it will be overwritten by the following default value:

mcli alias set {{ minio_alias }} {% if minio_endpoint is defined and minio_endpoint != '' %}{{ minio_endpoint }}{% else %}https://{{ minio_domain }}:{{ minio_port }}{% endif %} {{ minio_access_key }} {{ minio_secret_key }}

This alias & endpoint will be added to the admin user on the admin node.


minio_buckets

name: minio_buckets, type: bucket[], level: C

list of MinIO buckets to be created by default:

minio_buckets: [ { name: pgsql }, { name: infra },  { name: redis } ]

Three default buckets are created for the PGSQL, INFRA, and REDIS modules.


minio_users

name: minio_users, type: user[], level: C

list of MinIO users to be created; default value:

minio_users:
  - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
  - { access_key: pgbackrest , secret_key: S3User.Backup, policy: readwrite }

Two default users are created for the PostgreSQL DBA and pgBackRest.




4 - Playbook

How to manage MinIO cluster with ansible playbooks

You have to configure the MinIO cluster in the config inventory before running the playbook.


Playbook

There’s a built-in playbook: minio.yml for installing the MinIO cluster.

minio.yml

  • minio-id : generate minio identity
  • minio_install : install minio/mcli
    • minio_os_user : create os user minio
    • minio_pkg : install minio/mcli package
    • minio_clean : remove minio data (not default)
    • minio_dir : create minio directories
  • minio_config : generate minio config
    • minio_conf : minio main config
    • minio_cert : minio ssl cert
    • minio_dns : write minio dns records
  • minio_launch : launch minio service
  • minio_register : register minio to prometheus
  • minio_provision : create minio aliases/buckets/users
    • minio_alias : create minio client alias
    • minio_bucket : create minio buckets
    • minio_user : create minio biz users

The trusted CA file /etc/pki/ca.crt should already exist on all nodes; it is generated in the ca role and loaded & trusted by default in the node role.

You should install the MINIO module on Pigsty-managed nodes (i.e., install the NODE module first).



Commands

MINIO Playbook cheatsheet and common commands

./minio.yml -l <cls>                      # init MINIO module on group <cls>
./minio.yml -l minio -e minio_clean=true  # init MINIO, and remove existing MinIO & Data (DANGEROUS!)
./minio.yml -l minio -e minio_clean=true -t minio_clean # Remove existing MinIO & Data (DANGEROUS!)
./minio.yml -l minio -t minio_install     # install MinIO, setup dirs, without configure & launch
./minio.yml -l minio -t minio_config      # generate MinIO config & certs
./minio.yml -l minio -t minio_launch      # restart MinIO cluster

Safeguard

MinIO has a safeguard to prevent accidental deletion, controlled by the following parameter:

  • minio_clean, false by default, which means Pigsty will not remove existing MinIO data by default

If you wish to remove existing minio data during init, please set this parameter to true in the config file, or override it with command-line parameter -e minio_clean=true.

./minio.yml -l <cls> -e minio_clean=true

If you just want to clean existing MinIO data without installing a new instance, simply execute the minio_clean subtask:

./minio.yml -l <cls> -e minio_clean=true -t minio_clean





5 - Administration

Admin SOP, create & remove MinIO clusters and members

Here are some administration SOPs for MinIO:

  • Create Cluster
  • Remove Cluster
  • Expand Cluster
  • Shrink Cluster
  • Upgrade Cluster
  • Node Failure Recovery
  • Disk Failure Recovery

Check MinIO: FAQ for more questions.


Create Cluster

To create a MinIO cluster, define the minio cluster in inventory first:

minio: { hosts: { 10.10.10.10: { minio_seq: 1 } }, vars: { minio_cluster: minio } }

The minio_cluster param marks this cluster as a MinIO cluster, and minio_seq is the sequence number of the MinIO node, which is used to generate MinIO node names like minio-1, minio-2, etc.

This snippet defines a single-node MinIO cluster, using the following command to create the MinIO cluster:

./minio.yml -l minio  # init MinIO module on the minio group 

Remove Cluster

To destroy an existing MinIO cluster, use the minio_clean subtask of minio.yml, DO THINK before you type.

./minio.yml -l minio -t minio_clean -e minio_clean=true   # Stop MinIO and Remove Data Dir 

If you wish to remove the Prometheus monitoring target too:

ansible infra -b -a 'rm -rf /etc/prometheus/targets/minio/minio-1.yml'  # delete the minio monitoring target 

Expand Cluster

You cannot scale MinIO at the node/disk level, but you can scale it at the storage pool (multiple nodes) level.

Assume you have a 4-node MinIO cluster and want to double the capacity by adding another four-node storage pool.

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
    10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
    10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
    10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'
    minio_buckets: [ { name: pgsql }, { name: infra }, { name: redis } ]
    minio_users:
      - { access_key: dba , secret_key: S3User.DBA, policy: consoleAdmin }
      - { access_key: pgbackrest , secret_key: S3User.SomeNewPassWord , policy: readwrite }

    # bind a node l2 vip (10.10.10.9) to minio cluster (optional)
    node_cluster: minio
    vip_enabled: true
    vip_vrid: 128
    vip_address: 10.10.10.9
    vip_interface: eth1

    # expose minio service with haproxy on all nodes
    haproxy_services:
      - name: minio                    # [REQUIRED] service name, unique
        port: 9002                     # [REQUIRED] service port, unique
        balance: leastconn             # [OPTIONAL] load balancer algorithm
        options:                       # [OPTIONAL] minio health check
          - option httpchk
          - option http-keep-alive
          - http-check send meth OPTIONS uri /minio/health/live
          - http-check expect status 200
        servers:
          - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
          - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

Step 1: add the 4 new node definitions to the group, allocating sequence numbers 5 to 8. The key step is to modify the minio_volumes param to assign the new 4 nodes to a new storage pool.

minio:
  hosts:
    10.10.10.10: { minio_seq: 1 , nodename: minio-1 }
    10.10.10.11: { minio_seq: 2 , nodename: minio-2 }
    10.10.10.12: { minio_seq: 3 , nodename: minio-3 }
    10.10.10.13: { minio_seq: 4 , nodename: minio-4 }
    # new nodes
    10.10.10.14: { minio_seq: 5 , nodename: minio-5 }
    10.10.10.15: { minio_seq: 6 , nodename: minio-6 }
    10.10.10.16: { minio_seq: 7 , nodename: minio-7 }
    10.10.10.17: { minio_seq: 8 , nodename: minio-8 }

  vars:
    minio_cluster: minio
    minio_data: '/data{1...4}'
    minio_volumes: 'https://minio-{1...4}.pigsty:9000/data{1...4} https://minio-{5...8}.pigsty:9000/data{1...4}'  # new storage pool appended
    # misc params

Step 2: add these nodes to Pigsty:

./node.yml -l 10.10.10.14,10.10.10.15,10.10.10.16,10.10.10.17

Step 3: provision MinIO on the new nodes with the minio_install subtask (user, dir, pkg, …):

./minio.yml -l 10.10.10.14,10.10.10.15,10.10.10.16,10.10.10.17 -t minio_install

Step 4: reconfigure the entire MinIO cluster with the minio_config subtask:

./minio.yml -l minio -t minio_config

That is to say, the existing 4 nodes’ MINIO_VOLUMES configuration will be updated, too.

Step 5: restart the entire MinIO cluster simultaneously (be careful: do not perform a rolling restart!):

./minio.yml -l minio -t minio_launch -f 10   # with 10 parallel

Step 6 (optional): if you are using a load balancer, make sure the load balancer configuration is updated.

For example, add the new four nodes to the load balancer configuration:

# expose minio service with haproxy on all nodes
haproxy_services:
  - name: minio                    # [REQUIRED] service name, unique
    port: 9002                     # [REQUIRED] service port, unique
    balance: leastconn             # [OPTIONAL] load balancer algorithm
    options:                       # [OPTIONAL] minio health check
      - option httpchk
      - option http-keep-alive
      - http-check send meth OPTIONS uri /minio/health/live
      - http-check expect status 200
    servers:
      - { name: minio-1 ,ip: 10.10.10.10 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
      - { name: minio-2 ,ip: 10.10.10.11 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
      - { name: minio-3 ,ip: 10.10.10.12 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
      - { name: minio-4 ,ip: 10.10.10.13 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

      - { name: minio-5 ,ip: 10.10.10.14 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
      - { name: minio-6 ,ip: 10.10.10.15 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
      - { name: minio-7 ,ip: 10.10.10.16 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }
      - { name: minio-8 ,ip: 10.10.10.17 ,port: 9000 ,options: 'check-ssl ca-file /etc/pki/ca.crt check port 9000' }

Then run the haproxy subtask of the node.yml playbook to update the load balancer configuration:

./node.yml -l minio -t haproxy_config,haproxy_reload   # re-configure and reload haproxy service definition

If a node L2 VIP is also used to ensure reliable load balancer access, you also need to add the new nodes (if any) to the existing NODE VIP group:

./node.yml -l minio -t node_vip  # reload node l2 vip configuration

Shrink Cluster

MinIO cannot scale down at the node/disk level, but you can retire storage at the pool (multiple nodes) level: add a new storage pool, drain the old storage pool, migrate data to the new pool, and then retire the old one.
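
MinIO supports pool decommissioning for this: once the new pool is added and the cluster restarted, you can drain the old pool with the admin client. This is a rough sketch; verify the exact syntax against your MinIO version before running it:

mcli admin decommission start sss/ https://minio-{1...4}.pigsty:9000/data{1...4}   # drain the old pool
mcli admin decommission status sss/                                                # watch draining progress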


Upgrade Cluster

First, download the new version of the MinIO software packages to the local software repository on the INFRA node.
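
For example, you could fetch the new packages from the MinIO download site into the local repo directory; the exact file name depends on the release you pick, so treat the path below as a placeholder:

cd /www/pigsty/    # local software repo directory on the infra node
wget https://dl.min.io/server/minio/release/linux-amd64/minio-<version>.x86_64.rpm   # placeholder file name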

and then rebuild the software repo with:

./infra.yml -t repo_create

You can upgrade all MinIO software packages with Ansible package module:

ansible minio -m package -b -a 'name=minio state=latest'  # upgrade MinIO server
ansible minio -m package -b -a 'name=mcli state=latest'   # upgrade mcli client

Finally, notify the MinIO cluster to restart with the mcli command line tool:

mcli admin service restart sss

Node Failure Recovery

# 1. remove failure node
bin/node-rm <your_old_node_ip>

# 2. replace failure node with the same name (modify the inventory in case of IP change)
bin/node-add <your_new_node_ip>

# 3. provisioning MinIO on new node
./minio.yml -l <your_new_node_ip>

# 4. instruct MinIO to perform heal action
mcli admin heal sss

Disk Failure Recovery

# 1. umount failure disk
umount /dev/<your_disk_device>

# 2. replace with new driver, format with xfs
mkfs.xfs /dev/sdb -L DRIVE1

# 3. don't forget to setup fstab for auto-mount
vi /etc/fstab
# LABEL=DRIVE1     /mnt/drive1    xfs     defaults,noatime  0       2

# 4. remount the new disk
mount -a

# 5. instruct MinIO to perform heal action
mcli admin heal sss

6 - Monitoring

MinIO monitoring metrics, dashboards, and alerting rules

Dashboard

There is one dashboard for the MINIO module.

MinIO Overview: Overview of one single MinIO cluster

(screenshot: MinIO Overview dashboard)


Alert Rules

There are 3 predefined alert rules for MinIO, defined in files/prometheus/rules/minio.yml

  • MinioServerDown
  • MinioNodeOffline
  • MinioDiskOffline
#==============================================================#
#                         Aliveness                            #
#==============================================================#
# MinIO server instance down
- alert: MinioServerDown
  expr: minio_up < 1
  for: 1m
  labels: { level: 0, severity: CRIT, category: minio }
  annotations:
    summary: "CRIT MinioServerDown {{ $labels.ins }}@{{ $labels.instance }}"
    description: |
      minio_up[ins={{ $labels.ins }}, instance={{ $labels.instance }}] = {{ $value }} < 1
      http://g.pigsty/d/minio-overview      

#==============================================================#
#                         Error                                #
#==============================================================#
# MinIO node offline triggers a p1 alert
- alert: MinioNodeOffline
  expr: avg_over_time(minio_cluster_nodes_offline_total{job="minio"}[5m]) > 0
  for: 3m
  labels: { level: 1, severity: WARN, category: minio }
  annotations:
    summary: "WARN MinioNodeOffline: {{ $labels.cls }} {{ $value }}"
    description: |
      minio_cluster_nodes_offline_total[cls={{ $labels.cls }}] = {{ $value }} > 0
      http://g.pigsty/d/minio-overview?from=now-5m&to=now&var-cls={{$labels.cls}}      

# MinIO disk offline triggers a p1 alert
- alert: MinioDiskOffline
  expr: avg_over_time(minio_cluster_disk_offline_total{job="minio"}[5m]) > 0
  for: 3m
  labels: { level: 1, severity: WARN, category: minio }
  annotations:
    summary: "WARN MinioDiskOffline: {{ $labels.cls }} {{ $value }}"
    description: |
      minio_cluster_disk_offline_total[cls={{ $labels.cls }}] = {{ $value }} > 0
      http://g.pigsty/d/minio-overview?from=now-5m&to=now&var-cls={{$labels.cls}}      

7 - Metrics

Pigsty MINIO module metric list

MINIO module has 79 available metrics

Metric Name Type Labels Description
minio_audit_failed_messages counter ip, job, target_id, cls, instance, server, ins Total number of messages that failed to send since start
minio_audit_target_queue_length gauge ip, job, target_id, cls, instance, server, ins Number of unsent messages in queue for target
minio_audit_total_messages counter ip, job, target_id, cls, instance, server, ins Total number of messages sent since start
minio_cluster_bucket_total gauge ip, job, cls, instance, server, ins Total number of buckets in the cluster
minio_cluster_capacity_raw_free_bytes gauge ip, job, cls, instance, server, ins Total free capacity online in the cluster
minio_cluster_capacity_raw_total_bytes gauge ip, job, cls, instance, server, ins Total capacity online in the cluster
minio_cluster_capacity_usable_free_bytes gauge ip, job, cls, instance, server, ins Total free usable capacity online in the cluster
minio_cluster_capacity_usable_total_bytes gauge ip, job, cls, instance, server, ins Total usable capacity online in the cluster
minio_cluster_drive_offline_total gauge ip, job, cls, instance, server, ins Total drives offline in this cluster
minio_cluster_drive_online_total gauge ip, job, cls, instance, server, ins Total drives online in this cluster
minio_cluster_drive_total gauge ip, job, cls, instance, server, ins Total drives in this cluster
minio_cluster_health_erasure_set_healing_drives gauge pool, ip, job, cls, set, instance, server, ins Get the count of healing drives of this erasure set
minio_cluster_health_erasure_set_online_drives gauge pool, ip, job, cls, set, instance, server, ins Get the count of the online drives in this erasure set
minio_cluster_health_erasure_set_read_quorum gauge pool, ip, job, cls, set, instance, server, ins Get the read quorum for this erasure set
minio_cluster_health_erasure_set_status gauge pool, ip, job, cls, set, instance, server, ins Get current health status for this erasure set
minio_cluster_health_erasure_set_write_quorum gauge pool, ip, job, cls, set, instance, server, ins Get the write quorum for this erasure set
minio_cluster_health_status gauge ip, job, cls, instance, server, ins Get current cluster health status
minio_cluster_nodes_offline_total gauge ip, job, cls, instance, server, ins Total number of MinIO nodes offline
minio_cluster_nodes_online_total gauge ip, job, cls, instance, server, ins Total number of MinIO nodes online
minio_cluster_objects_size_distribution gauge ip, range, job, cls, instance, server, ins Distribution of object sizes across a cluster
minio_cluster_objects_version_distribution gauge ip, range, job, cls, instance, server, ins Distribution of object versions across a cluster
minio_cluster_usage_deletemarker_total gauge ip, job, cls, instance, server, ins Total number of delete markers in a cluster
minio_cluster_usage_object_total gauge ip, job, cls, instance, server, ins Total number of objects in a cluster
minio_cluster_usage_total_bytes gauge ip, job, cls, instance, server, ins Total cluster usage in bytes
minio_cluster_usage_version_total gauge ip, job, cls, instance, server, ins Total number of versions (includes delete marker) in a cluster
minio_cluster_webhook_failed_messages counter ip, job, cls, instance, server, ins Number of messages that failed to send
minio_cluster_webhook_online gauge ip, job, cls, instance, server, ins Is the webhook online?
minio_cluster_webhook_queue_length counter ip, job, cls, instance, server, ins Webhook queue length
minio_cluster_webhook_total_messages counter ip, job, cls, instance, server, ins Total number of messages sent to this target
minio_cluster_write_quorum gauge ip, job, cls, instance, server, ins Maximum write quorum across all pools and sets
minio_node_file_descriptor_limit_total gauge ip, job, cls, instance, server, ins Limit on total number of open file descriptors for the MinIO Server process
minio_node_file_descriptor_open_total gauge ip, job, cls, instance, server, ins Total number of open file descriptors by the MinIO Server process
minio_node_go_routine_total gauge ip, job, cls, instance, server, ins Total number of go routines running
minio_node_ilm_expiry_pending_tasks gauge ip, job, cls, instance, server, ins Number of pending ILM expiry tasks in the queue
minio_node_ilm_transition_active_tasks gauge ip, job, cls, instance, server, ins Number of active ILM transition tasks
minio_node_ilm_transition_missed_immediate_tasks gauge ip, job, cls, instance, server, ins Number of missed immediate ILM transition tasks
minio_node_ilm_transition_pending_tasks gauge ip, job, cls, instance, server, ins Number of pending ILM transition tasks in the queue
minio_node_ilm_versions_scanned counter ip, job, cls, instance, server, ins Total number of object versions checked for ilm actions since server start
minio_node_io_rchar_bytes counter ip, job, cls, instance, server, ins Total bytes read by the process from the underlying storage system including cache, /proc/[pid]/io rchar
minio_node_io_read_bytes counter ip, job, cls, instance, server, ins Total bytes read by the process from the underlying storage system, /proc/[pid]/io read_bytes
minio_node_io_wchar_bytes counter ip, job, cls, instance, server, ins Total bytes written by the process to the underlying storage system including page cache, /proc/[pid]/io wchar
minio_node_io_write_bytes counter ip, job, cls, instance, server, ins Total bytes written by the process to the underlying storage system, /proc/[pid]/io write_bytes
minio_node_process_cpu_total_seconds counter ip, job, cls, instance, server, ins Total user and system CPU time spent in seconds
minio_node_process_resident_memory_bytes gauge ip, job, cls, instance, server, ins Resident memory size in bytes
minio_node_process_starttime_seconds gauge ip, job, cls, instance, server, ins Start time for MinIO process per node, time in seconds since Unix epoc
minio_node_process_uptime_seconds gauge ip, job, cls, instance, server, ins Uptime for MinIO process per node in seconds
minio_node_scanner_bucket_scans_finished counter ip, job, cls, instance, server, ins Total number of bucket scans finished since server start
minio_node_scanner_bucket_scans_started counter ip, job, cls, instance, server, ins Total number of bucket scans started since server start
minio_node_scanner_directories_scanned counter ip, job, cls, instance, server, ins Total number of directories scanned since server start
minio_node_scanner_objects_scanned counter ip, job, cls, instance, server, ins Total number of unique objects scanned since server start
minio_node_scanner_versions_scanned counter ip, job, cls, instance, server, ins Total number of object versions scanned since server start
minio_node_syscall_read_total counter ip, job, cls, instance, server, ins Total read SysCalls to the kernel. /proc/[pid]/io syscr
minio_node_syscall_write_total counter ip, job, cls, instance, server, ins Total write SysCalls to the kernel. /proc/[pid]/io syscw
minio_notify_current_send_in_progress gauge ip, job, cls, instance, server, ins Number of concurrent async Send calls active to all targets (deprecated, please use ‘minio_notify_target_current_send_in_progress’ instead)
minio_notify_events_errors_total counter ip, job, cls, instance, server, ins Events that were failed to be sent to the targets (deprecated, please use ‘minio_notify_target_failed_events’ instead)
minio_notify_events_sent_total counter ip, job, cls, instance, server, ins Total number of events sent to the targets (deprecated, please use ‘minio_notify_target_total_events’ instead)
minio_notify_events_skipped_total counter ip, job, cls, instance, server, ins Events that were skipped to be sent to the targets due to the in-memory queue being full
minio_s3_requests_4xx_errors_total counter ip, job, cls, instance, server, ins, api Total number of S3 requests with (4xx) errors
minio_s3_requests_errors_total counter ip, job, cls, instance, server, ins, api Total number of S3 requests with (4xx and 5xx) errors
minio_s3_requests_incoming_total gauge ip, job, cls, instance, server, ins Total number of incoming S3 requests
minio_s3_requests_inflight_total gauge ip, job, cls, instance, server, ins, api Total number of S3 requests currently in flight
minio_s3_requests_rejected_auth_total counter ip, job, cls, instance, server, ins Total number of S3 requests rejected for auth failure
minio_s3_requests_rejected_header_total counter ip, job, cls, instance, server, ins Total number of S3 requests rejected for invalid header
minio_s3_requests_rejected_invalid_total counter ip, job, cls, instance, server, ins Total number of invalid S3 requests
minio_s3_requests_rejected_timestamp_total counter ip, job, cls, instance, server, ins Total number of S3 requests rejected for invalid timestamp
minio_s3_requests_total counter ip, job, cls, instance, server, ins, api Total number of S3 requests
minio_s3_requests_ttfb_seconds_distribution gauge ip, job, cls, le, instance, server, ins, api Distribution of time to first byte across API calls
minio_s3_requests_waiting_total gauge ip, job, cls, instance, server, ins Total number of S3 requests in the waiting queue
minio_s3_traffic_received_bytes counter ip, job, cls, instance, server, ins Total number of s3 bytes received
minio_s3_traffic_sent_bytes counter ip, job, cls, instance, server, ins Total number of s3 bytes sent
minio_software_commit_info gauge ip, job, cls, instance, commit, server, ins Git commit hash for the MinIO release
minio_software_version_info gauge ip, job, cls, instance, version, server, ins MinIO Release tag for the server
minio_up Unknown ip, job, cls, instance, ins N/A
minio_usage_last_activity_nano_seconds gauge ip, job, cls, instance, server, ins Time elapsed (in nano seconds) since last scan activity.
scrape_duration_seconds Unknown ip, job, cls, instance, ins N/A
scrape_samples_post_metric_relabeling Unknown ip, job, cls, instance, ins N/A
scrape_samples_scraped Unknown ip, job, cls, instance, ins N/A
scrape_series_added Unknown ip, job, cls, instance, ins N/A
up Unknown ip, job, cls, instance, ins N/A

8 - FAQ

Pigsty MINIO module frequently asked questions

Failed to launch a multi-node / multi-drive MinIO cluster.

In Multi-Drive or Multi-Node mode, MinIO will refuse to start if the data dir is not a valid mount point.

Use mounted disks for the MinIO data dir rather than regular directories. You can use a regular directory only in Single-Node Single-Drive mode.


How to deploy a multi-node multi-drive MinIO cluster?

Check Create Multi-Node Multi-Driver MinIO Cluster


How to add a member to the existing MinIO cluster?

You’d better plan the MinIO cluster before deployment, since adding members requires a global restart.

Check this: Expand MinIO Deployment


How to use a HA MinIO deployment for PGSQL?

Access the HA MinIO cluster with an optional load balancer and different ports.

Here is an example: Access MinIO Service