Module: DOCKER

The Docker daemon service allows you to pull up stateless software in addition to PostgreSQL.

Deploy Docker on Pigsty-managed nodes: Configuration | Administration | Playbook | Dashboard | Parameter


Concept

Docker is a popular container runtime that provides a standardized software delivery solution.


Configuration

The DOCKER module is different from other modules: it does not require pre-configuration to install and enable.

Just run the docker.yml playbook on any Pigsty managed node.

However, if you wish to add the Docker daemon as a Prometheus monitoring target, you have to set the docker_enabled parameter on those nodes.
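
For example, a minimal sketch assuming the target nodes already have docker_enabled: true set in the inventory:

./docker.yml -l <selector>                        # install docker daemon & docker compose
./node.yml -l <selector> -t register_prometheus   # register docker as a prometheus target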


Administration

Using Mirrors

Consider using a Docker mirror registry; log in with:

docker login quay.io    # enter your username & password

Monitoring

Docker monitoring is part of the NODE module's responsibility: it registers Docker targets to Prometheus.

You have to set docker_enabled on those nodes, then re-register them with:

./node.yml -l <selector> -t register_prometheus
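
To verify that metrics are being exposed (assuming the default docker_exporter_port of 9323), you can query the exporter endpoint directly:

curl -s http://<node_ip>:9323/metrics | head   # should show engine_daemon_* and other docker metrics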

Compose Template

Pigsty has a series of built-in docker compose templates.
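
A typical workflow with one of these templates (the directory path here is illustrative) looks like:

cd ~/pigsty/app/<template>   # enter a compose template directory (path is illustrative)
docker compose up -d         # pull images and launch the stack in the background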


Parameter

There are 8 parameters for the DOCKER module.

Parameter Section Type Level Comment
docker_enabled DOCKER bool G/C/I enable docker on this node?
docker_data DOCKER path G/C/I docker data directory, /var/lib/docker by default
docker_storage_driver DOCKER enum G/C/I docker storage driver, overlay2 by default
docker_cgroups_driver DOCKER enum G/C/I docker cgroup fs driver: cgroupfs,systemd
docker_registry_mirrors DOCKER string[] G/C/I docker registry mirror list
docker_exporter_port DOCKER port G docker metrics exporter port, 9323 by default
docker_image DOCKER string[] G/C/I docker image to be pulled, [] by default
docker_image_cache DOCKER path G/C/I docker image cache tarball glob, /tmp/docker/*.tgz by default

1 - Usage

Get started with Docker in Pigsty: install, uninstall, download, mirror, proxy, images, and more.

Pigsty includes built-in Docker support, allowing you to quickly deploy containerized applications.

Getting Started

Docker is an optional module and is not enabled by default in most Pigsty configuration templates. Users must explicitly download and configure Docker to use it within Pigsty.

For instance, in the default meta template, Docker is not installed. However, the rich single-node template will download and install Docker.

The key differences between these configurations are the repo_modules and repo_packages parameters:

repo_modules: infra,node,pgsql,docker  # <--- Enable Docker repo
repo_packages:
  - node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common, docker   # <--- Download Docker

After downloading Docker, enable it on target nodes using the docker_enabled: true setting, and configure other parameters as needed:

infra:
  hosts:
    10.10.10.10: { infra_seq: 1, nodename: infra-1 }
    10.10.10.11: { infra_seq: 2, nodename: infra-2 }
  vars:
    docker_enabled: true  # Install Docker on this group

Finally, use the docker.yml playbook to install:

./docker.yml -l infra    # Install Docker on the infra group

Installation

To temporarily install Docker directly from the internet on selected nodes:

./node.yml -e '{"node_repo_modules":"node,docker","node_packages":["docker-ce,docker-compose-plugin"]}' -t node_repo,node_pkg -l <select_group_ip>

This enables the required repos (node,docker) and installs packages docker-ce and docker-compose-plugin.

For automatic Docker downloads during Pigsty initialization, see below.

Uninstallation

For simplicity, Pigsty does not provide a dedicated Docker uninstall playbook. Uninstall Docker with Ansible directly:

ansible minio -m package -b -a 'name=docker-ce state=absent'
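
You may want to stop the Docker daemon before removing the packages; for example, reusing the minio group from above:

ansible minio -m systemd -b -a 'name=docker state=stopped'                 # stop the docker daemon first
ansible minio -m package -b -a 'name=docker-compose-plugin state=absent'   # remove the compose plugin as well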

Download

To download Docker during Pigsty installation, enable Docker repositories by modifying repo_modules and specifying Docker packages in repo_packages or repo_extra_packages:

repo_modules: infra,node,pgsql,docker  # Enable Docker repo
repo_packages:
  - node-bootstrap, infra-package, infra-addons, node-package1, node-package2, pgsql-common, docker
repo_extra_packages:
  - pgsql-main docker

Packages defined here (docker-ce, docker-compose-plugin) are automatically downloaded during the default install.yml run, becoming available via local repositories.

After Pigsty installation, update repositories by running ./infra.yml -t repo_build.

Docker installation requires enabling the Docker module in repo_modules.

Repository

Docker requires external repositories, pre-configured under repo_upstream with the module name docker:

- { name: docker-ce, description: 'Docker CE', module: docker, releases: [7,8,9], arch: [x86_64, aarch64], baseurl: { default: 'https://download.docker.com/linux/centos/$releasever/$basearch/stable', china: 'https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable', europe: 'https://mirrors.xtom.de/docker-ce/linux/centos/$releasever/$basearch/stable' }}
- { name: docker-ce, description: 'Docker CE', module: docker, releases: [11,12,20,22,24], arch: [x86_64, aarch64], baseurl: { default: 'https://download.docker.com/linux/${distro_name} ${distro_codename} stable', china: 'https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/${distro_name} ${distro_codename} stable' }}

Note: The official Docker repo is blocked by default in Mainland China. Use regional mirrors to resolve this.

Proxy

Configure network proxies using the proxy_env parameter in Pigsty:

proxy_env:
  no_proxy: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,*.pigsty,*.aliyun.com,mirrors.*"
  http_proxy: 'http://127.0.0.1:12345'
  https_proxy: 'http://127.0.0.1:12345'
  all_proxy: 'http://127.0.0.1:12345'

Use curl to verify proxy effectiveness. Avoid combining proxy servers with Mainland China mirrors.
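
For example, a quick check that the proxy can reach the official Docker repo (proxy address taken from proxy_env above):

curl -x http://127.0.0.1:12345 -sI https://download.docker.com | head -n 1   # expect an HTTP response line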

Mirror Sites

Specify Docker Registry Mirrors using docker_registry_mirrors:

Example mirrors:

  • Alibaba Cloud:

    ["https://registry.cn-hangzhou.aliyuncs.com"]
    
  • Tencent Cloud:

    ["https://ccr.ccs.tencentyun.com"]
    

Pulling Images

Preload Docker images using docker_image and docker_image_cache:

infra:
  hosts:
    10.10.10.10: { infra_seq: 1 }
  vars:
    docker_enabled: true
    docker_image:
      - redis:latest

Or preload using local compressed images (tgz files):

- name: copy local docker images   # from the admin node to the target node's image cache dir
  copy: src="{{ item }}" dest="/tmp/docker/"
  with_fileglob: "/tmp/supabase/*.tgz"

Applications

Pigsty provides ready-to-use software templates based on Docker Compose to deploy external applications seamlessly integrated with Pigsty-managed database clusters.




2 - Parameter

8 parameters to customize the Docker module as needed.

Parameters

There are 8 parameters for the Docker module:

Name Type Level Comment
docker_enabled bool G/C/I enable docker on this node?
docker_data path G/C/I Docker data directory, /var/lib/docker by default
docker_storage_driver enum G/C/I Docker storage driver, overlay2 by default
docker_cgroups_driver enum G/C/I docker cgroup fs driver: cgroupfs,systemd
docker_registry_mirrors string[] G/C/I docker registry mirror list
docker_exporter_port port G Docker metrics exporter port, 9323 by default
docker_image string[] G/C/I docker image to be pulled, [] by default
docker_image_cache path G/C/I docker image cache tarball glob, /tmp/docker by default

Defaults

Docker’s default parameters are defined in roles/docker/defaults/main.yml

docker_enabled: false             # enable docker on this node?
docker_data: /var/lib/docker      # docker data directory, /var/lib/docker by default
docker_storage_driver: overlay2   # docker storage driver, can be zfs, btrfs
docker_cgroups_driver: systemd    # docker cgroup fs driver: cgroupfs,systemd
docker_registry_mirrors: []       # docker registry mirror list
docker_exporter_port: 9323        # docker metrics exporter port, 9323 by default
docker_image: []                  # docker image to be pulled after bootstrap
docker_image_cache: /tmp/docker/*.tgz # docker image cache glob pattern

docker_enabled

name: docker_enabled, type: bool, level: G/C/I

enable docker on this node? default value is false


docker_data

name: docker_data, type: path, level: C

Docker data directory, /var/lib/docker by default.


docker_storage_driver

name: docker_storage_driver, type: enum, level: C

Docker storage driver, overlay2 by default.

Please refer to: https://docs.docker.com/engine/storage/drivers/select-storage-driver/

  • overlay2
  • fuse-overlayfs
  • btrfs
  • zfs
  • vfs
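
This setting, together with docker_data above, is typically rendered into the Docker daemon configuration; a minimal sketch of the corresponding /etc/docker/daemon.json entries (the exact template output may differ) would be:

{
  "data-root": "/var/lib/docker",
  "storage-driver": "overlay2"
}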

docker_cgroups_driver

name: docker_cgroups_driver, type: enum, level: G/C/I

docker cgroup fs driver, could be cgroupfs or systemd, default value: systemd


docker_registry_mirrors

name: docker_registry_mirrors, type: string[], level: G/C/I

docker registry mirror list, default value: []. Example:

Here are some examples of intranet mirror sites from various cloud vendors:

["https://docker.m.daocloud.io"]                # domestic DaoCloud image site
["https://docker.1ms.run"]                      # domestic millisecond image site
["https://mirror.ccs.tencentyun.com"]           # tencent cloud intranet image site
["https://registry.cn-hangzhou.aliyuncs.com"]   # aliyun cloud intranet image site, login required

Consider using Cloudflare Worker Docker Proxy

If the pull speed is too slow, you can also consider using another registry, e.g. log in with docker login quay.io.


docker_exporter_port

name: docker_exporter_port, type: port, level: G

Docker metrics exporter port, 9323 by default.
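
This port corresponds to the Docker daemon's built-in Prometheus metrics endpoint; a minimal sketch of the matching daemon.json entry (assuming the default port) would be:

{
  "metrics-addr": "0.0.0.0:9323"
}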


docker_image

name: docker_image, type: string[], level: G/C/I

docker image to be pulled, [] by default

Images listed here will be pulled during Docker provisioning.


docker_image_cache

name: docker_image_cache, type: path, level: G/C/I

docker image cache tarball glob list, "/tmp/docker/*.tgz" by default.

Local Docker image tarballs with the .tgz suffix matching this glob will be loaded into Docker one by one:

cat *.tgz | gzip -d -c - | docker load
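
To produce such a tarball in the first place, you can save and compress an image with docker save; for example (the image name is illustrative):

docker save redis:latest | gzip -c > /tmp/docker/redis.tgz   # create a compressed image tarball for the cache dir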



3 - Playbook

How to manage Docker with Ansible / Pigsty playbooks

The DOCKER module has only one playbook: docker.yml, which installs the Docker daemon and Docker Compose on target nodes.


docker.yml

The raw playbook: docker.yml.

Running this playbook will install docker-ce and docker-compose-plugin on target nodes with the docker_enabled: true flag.

Here are the available subtasks in the docker.yml playbook:

  • docker_install: Install Docker and Docker Compose packages on the node.
  • docker_admin: Add specified users to the Docker administrator user group.
  • docker_config: Generate Docker daemon service configuration file.
  • docker_launch: Start the Docker daemon service.
  • docker_register: Register Docker daemon as a Prometheus monitoring target.
  • docker_image: Attempt to load prepackaged Docker images from /tmp/docker/*.tgz if present.
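
Subtasks can be run selectively with the -t option; for example, to regenerate the daemon configuration and relaunch Docker (the group name is illustrative):

./docker.yml -l infra -t docker_config,docker_launch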

The Docker module does not provide a dedicated playbook for uninstalling Docker. If you need to uninstall Docker, you can manually stop the Docker service and then uninstall it:

systemctl stop docker                        # Stop Docker daemon service
yum remove docker-ce docker-compose-plugin   # Uninstall Docker on EL systems
apt remove docker-ce docker-compose-plugin   # Uninstall Docker on Debian systems



4 - Metrics

Pigsty Docker module metric list

The DOCKER module has 123 available metrics:

Metric Name Type Labels Description
builder_builds_failed_total counter ip, cls, reason, ins, job, instance Number of failed image builds
builder_builds_triggered_total counter ip, cls, ins, job, instance Number of triggered image builds
docker_up Unknown ip, cls, ins, job, instance N/A
engine_daemon_container_actions_seconds_bucket Unknown ip, cls, ins, job, instance, le, action N/A
engine_daemon_container_actions_seconds_count Unknown ip, cls, ins, job, instance, action N/A
engine_daemon_container_actions_seconds_sum Unknown ip, cls, ins, job, instance, action N/A
engine_daemon_container_states_containers gauge ip, cls, ins, job, instance, state The count of containers in various states
engine_daemon_engine_cpus_cpus gauge ip, cls, ins, job, instance The number of cpus that the host system of the engine has
engine_daemon_engine_info gauge ip, cls, architecture, ins, job, instance, os_version, kernel, version, graphdriver, os, daemon_id, commit, os_type The information related to the engine and the OS it is running on
engine_daemon_engine_memory_bytes gauge ip, cls, ins, job, instance The number of bytes of memory that the host system of the engine has
engine_daemon_events_subscribers_total gauge ip, cls, ins, job, instance The number of current subscribers to events
engine_daemon_events_total counter ip, cls, ins, job, instance The number of events logged
engine_daemon_health_checks_failed_total counter ip, cls, ins, job, instance The total number of failed health checks
engine_daemon_health_check_start_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
engine_daemon_health_check_start_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
engine_daemon_health_check_start_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
engine_daemon_health_checks_total counter ip, cls, ins, job, instance The total number of health checks
engine_daemon_host_info_functions_seconds_bucket Unknown ip, cls, ins, job, instance, le, function N/A
engine_daemon_host_info_functions_seconds_count Unknown ip, cls, ins, job, instance, function N/A
engine_daemon_host_info_functions_seconds_sum Unknown ip, cls, ins, job, instance, function N/A
engine_daemon_image_actions_seconds_bucket Unknown ip, cls, ins, job, instance, le, action N/A
engine_daemon_image_actions_seconds_count Unknown ip, cls, ins, job, instance, action N/A
engine_daemon_image_actions_seconds_sum Unknown ip, cls, ins, job, instance, action N/A
engine_daemon_network_actions_seconds_bucket Unknown ip, cls, ins, job, instance, le, action N/A
engine_daemon_network_actions_seconds_count Unknown ip, cls, ins, job, instance, action N/A
engine_daemon_network_actions_seconds_sum Unknown ip, cls, ins, job, instance, action N/A
etcd_debugging_snap_save_marshalling_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
etcd_debugging_snap_save_marshalling_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
etcd_debugging_snap_save_marshalling_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
etcd_debugging_snap_save_total_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
etcd_debugging_snap_save_total_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
etcd_debugging_snap_save_total_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
etcd_disk_wal_fsync_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
etcd_disk_wal_fsync_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
etcd_disk_wal_fsync_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
etcd_disk_wal_write_bytes_total gauge ip, cls, ins, job, instance Total number of bytes written in WAL.
etcd_snap_db_fsync_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
etcd_snap_db_fsync_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
etcd_snap_db_fsync_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
etcd_snap_db_save_total_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
etcd_snap_db_save_total_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
etcd_snap_db_save_total_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
etcd_snap_fsync_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
etcd_snap_fsync_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
etcd_snap_fsync_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
go_gc_duration_seconds summary ip, cls, ins, job, instance, quantile A summary of the pause duration of garbage collection cycles.
go_gc_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
go_gc_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
go_goroutines gauge ip, cls, ins, job, instance Number of goroutines that currently exist.
go_info gauge ip, cls, ins, job, version, instance Information about the Go environment.
go_memstats_alloc_bytes counter ip, cls, ins, job, instance Total number of bytes allocated, even if freed.
go_memstats_alloc_bytes_total counter ip, cls, ins, job, instance Total number of bytes allocated, even if freed.
go_memstats_buck_hash_sys_bytes gauge ip, cls, ins, job, instance Number of bytes used by the profiling bucket hash table.
go_memstats_frees_total counter ip, cls, ins, job, instance Total number of frees.
go_memstats_gc_sys_bytes gauge ip, cls, ins, job, instance Number of bytes used for garbage collection system metadata.
go_memstats_heap_alloc_bytes gauge ip, cls, ins, job, instance Number of heap bytes allocated and still in use.
go_memstats_heap_idle_bytes gauge ip, cls, ins, job, instance Number of heap bytes waiting to be used.
go_memstats_heap_inuse_bytes gauge ip, cls, ins, job, instance Number of heap bytes that are in use.
go_memstats_heap_objects gauge ip, cls, ins, job, instance Number of allocated objects.
go_memstats_heap_released_bytes gauge ip, cls, ins, job, instance Number of heap bytes released to OS.
go_memstats_heap_sys_bytes gauge ip, cls, ins, job, instance Number of heap bytes obtained from system.
go_memstats_last_gc_time_seconds gauge ip, cls, ins, job, instance Number of seconds since 1970 of last garbage collection.
go_memstats_lookups_total counter ip, cls, ins, job, instance Total number of pointer lookups.
go_memstats_mallocs_total counter ip, cls, ins, job, instance Total number of mallocs.
go_memstats_mcache_inuse_bytes gauge ip, cls, ins, job, instance Number of bytes in use by mcache structures.
go_memstats_mcache_sys_bytes gauge ip, cls, ins, job, instance Number of bytes used for mcache structures obtained from system.
go_memstats_mspan_inuse_bytes gauge ip, cls, ins, job, instance Number of bytes in use by mspan structures.
go_memstats_mspan_sys_bytes gauge ip, cls, ins, job, instance Number of bytes used for mspan structures obtained from system.
go_memstats_next_gc_bytes gauge ip, cls, ins, job, instance Number of heap bytes when next garbage collection will take place.
go_memstats_other_sys_bytes gauge ip, cls, ins, job, instance Number of bytes used for other system allocations.
go_memstats_stack_inuse_bytes gauge ip, cls, ins, job, instance Number of bytes in use by the stack allocator.
go_memstats_stack_sys_bytes gauge ip, cls, ins, job, instance Number of bytes obtained from system for stack allocator.
go_memstats_sys_bytes gauge ip, cls, ins, job, instance Number of bytes obtained from system.
go_threads gauge ip, cls, ins, job, instance Number of OS threads created.
logger_log_entries_size_greater_than_buffer_total counter ip, cls, ins, job, instance Number of log entries which are larger than the log buffer
logger_log_read_operations_failed_total counter ip, cls, ins, job, instance Number of log reads from container stdio that failed
logger_log_write_operations_failed_total counter ip, cls, ins, job, instance Number of log write operations that failed
process_cpu_seconds_total counter ip, cls, ins, job, instance Total user and system CPU time spent in seconds.
process_max_fds gauge ip, cls, ins, job, instance Maximum number of open file descriptors.
process_open_fds gauge ip, cls, ins, job, instance Number of open file descriptors.
process_resident_memory_bytes gauge ip, cls, ins, job, instance Resident memory size in bytes.
process_start_time_seconds gauge ip, cls, ins, job, instance Start time of the process since unix epoch in seconds.
process_virtual_memory_bytes gauge ip, cls, ins, job, instance Virtual memory size in bytes.
process_virtual_memory_max_bytes gauge ip, cls, ins, job, instance Maximum amount of virtual memory available in bytes.
promhttp_metric_handler_requests_in_flight gauge ip, cls, ins, job, instance Current number of scrapes being served.
promhttp_metric_handler_requests_total counter ip, cls, ins, job, instance, code Total number of scrapes by HTTP status code.
scrape_duration_seconds Unknown ip, cls, ins, job, instance N/A
scrape_samples_post_metric_relabeling Unknown ip, cls, ins, job, instance N/A
scrape_samples_scraped Unknown ip, cls, ins, job, instance N/A
scrape_series_added Unknown ip, cls, ins, job, instance N/A
swarm_dispatcher_scheduling_delay_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_dispatcher_scheduling_delay_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_dispatcher_scheduling_delay_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_manager_configs_total gauge ip, cls, ins, job, instance The number of configs in the cluster object store
swarm_manager_leader gauge ip, cls, ins, job, instance Indicates if this manager node is a leader
swarm_manager_networks_total gauge ip, cls, ins, job, instance The number of networks in the cluster object store
swarm_manager_nodes gauge ip, cls, ins, job, instance, state The number of nodes
swarm_manager_secrets_total gauge ip, cls, ins, job, instance The number of secrets in the cluster object store
swarm_manager_services_total gauge ip, cls, ins, job, instance The number of services in the cluster object store
swarm_manager_tasks_total gauge ip, cls, ins, job, instance, state The number of tasks in the cluster object store
swarm_node_manager gauge ip, cls, ins, job, instance Whether this node is a manager or not
swarm_raft_snapshot_latency_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_raft_snapshot_latency_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_raft_snapshot_latency_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_raft_transaction_latency_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_raft_transaction_latency_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_raft_transaction_latency_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_store_batch_latency_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_store_batch_latency_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_store_batch_latency_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_store_lookup_latency_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_store_lookup_latency_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_store_lookup_latency_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_store_memory_store_lock_duration_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_store_memory_store_lock_duration_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_store_memory_store_lock_duration_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_store_read_tx_latency_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_store_read_tx_latency_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_store_read_tx_latency_seconds_sum Unknown ip, cls, ins, job, instance N/A
swarm_store_write_tx_latency_seconds_bucket Unknown ip, cls, ins, job, instance, le N/A
swarm_store_write_tx_latency_seconds_count Unknown ip, cls, ins, job, instance N/A
swarm_store_write_tx_latency_seconds_sum Unknown ip, cls, ins, job, instance N/A
up Unknown ip, cls, ins, job, instance N/A

5 - FAQ

Pigsty Docker module frequently asked questions

How to install Docker?

Install with the docker.yml playbook, targeting any node managed by Pigsty.

./docker.yml -l <selector>