When creating a node, you need to configure data disks for it.
A data disk is divided into Kubernetes space and user space. The user space is the portion of the local disk that is not allocated to Kubernetes. The Kubernetes space consists of the following two parts:
The size of the Docker space affects image downloads and container startup and running. This section describes how the Docker space is used so that you can configure it appropriately.
By default, a data disk, 100 GB for example, is divided as follows (depending on the container storage Rootfs):
The thin pool is dynamically mounted. You can view it by running the lsblk command on a node, but it does not appear in the output of the df -h command.
Using rootfs for container storage in CCE
You can log in to the node and run the docker info command to view the storage engine type.
# docker info
Containers: 20
Running: 17
Paused: 0
Stopped: 3
Images: 16
Server Version: 18.09.0
Storage Driver: devicemapper
Whether the Docker space of a node is sufficient depends on the number of pods and the space configured for each container.
The Docker space should be greater than the total disk space used by containers. Formula: Docker space > Number of containers x Available data space for a single container (basesize)
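The formula above can be sketched as a quick sizing check. The container count and basesize below are illustrative assumptions, not product defaults:

```shell
# Sizing check for: Docker space > number of containers x basesize
containers=20      # assumed number of containers on the node
basesize=10        # assumed per-container data space, in GB
required=$((containers * basesize))
echo "Required Docker space: more than ${required} GB"
```

If the planned Docker space is not larger than this product, containers can exhaust the shared storage even though each one stays within its own basesize limit.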
When Device Mapper is used, you can limit the size of a single container's /home directory (10 GB by default), but all containers on the node still share the node's thin pool for storage; they are not completely isolated. When the thin pool space used by some containers reaches the upper limit, other containers cannot run properly.
In addition, after a file is deleted from a container's /home directory, the thin pool space it occupied is not released immediately. Therefore, even if basesize is set to 10 GB, the thin pool space occupied by a container's files keeps growing toward 10 GB as files are created; space freed by deletions is reused, but only after a delay. If the number of containers on the node multiplied by basesize is greater than the node's thin pool size, the thin pool may be used up.
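Because deleted-file space is reclaimed lazily, the worst case is every container filling its basesize at once. A minimal sketch of that check, with an assumed thin pool size:

```shell
# Worst-case thin pool check: containers x basesize vs. thin pool size.
# All values are illustrative assumptions, in GB.
containers=20
basesize=10        # per-container limit; freed space is reclaimed lazily
thin_pool=150      # assumed thin pool size on the node
worst_case=$((containers * basesize))
if [ "$worst_case" -gt "$thin_pool" ]; then
  echo "Thin pool may be exhausted: ${worst_case} GB worst case > ${thin_pool} GB pool"
fi
```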
When the Docker space is insufficient, image garbage collection is triggered.
The policy for garbage collecting images takes two factors into consideration: HighThresholdPercent and LowThresholdPercent. Disk usage above the high threshold (default: 85%) triggers garbage collection, which deletes least recently used images until disk usage falls below the low threshold (default: 80%).
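The two thresholds can be sketched as follows. The thresholds are the kubelet defaults cited above; the current disk usage value is an illustrative assumption:

```shell
# Sketch of the image garbage collection thresholds.
high_threshold=85   # GC starts when disk usage exceeds this (%)
low_threshold=80    # GC deletes LRU images until usage drops below this (%)
disk_usage=90       # assumed current Docker space usage (%)
if [ "$disk_usage" -gt "$high_threshold" ]; then
  echo "GC triggered: deleting LRU images until usage <= ${low_threshold}%"
fi
```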