Overview
Introduction
A container cluster consists of a set of worker machines, called nodes, that run containerized applications. A node can be a virtual machine (VM) or a physical machine (PM), depending on your service requirements. The components on a node include kubelet, container runtime, and kube-proxy.
Note
A Kubernetes cluster consists of master nodes and worker nodes. The nodes described in this section are worker nodes, the computing nodes of a cluster that run containerized applications.
CCE uses high-performance Elastic Cloud Servers (ECSs) as nodes to build highly available Kubernetes clusters.
Notes
- To ensure node stability, a certain amount of CCE node resources is reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. Therefore, the total amount of node resources and the amount of allocatable node resources in your cluster differ. The larger the node specifications, the more containers can be deployed on the node, and the more resources must be reserved to run Kubernetes components.
- The node networking (such as the VM networking and container networking) is taken over by CCE. You are not allowed to add NICs or change routes. If you modify the networking configuration, the availability of CCE may be affected.
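The relationship between total and allocatable resources described above can be sketched as follows. The reservation formula here is a simplified assumption for illustration, not CCE's actual reservation algorithm:

```python
# Sketch: allocatable = capacity - reserved.
# The reservation rule (flat 1 GiB plus 10% of capacity) is an
# illustrative assumption, not CCE's actual formula.
def allocatable_memory_mib(capacity_mib: int) -> int:
    """Reserve memory for system components (kubelet, kube-proxy,
    container engine) and return what remains for workloads."""
    reserved = 1024 + capacity_mib // 10
    return capacity_mib - reserved

# A larger node reserves more in absolute terms:
small = allocatable_memory_mib(4096)    # 4 GiB node
large = allocatable_memory_mib(16384)   # 16 GiB node
```

The point of the sketch is only that allocatable resources are always strictly less than the node's nominal capacity, and that the absolute reservation grows with node size.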
Node Lifecycle
A lifecycle covers the node statuses recorded from the time a node is created to the time it is deleted or released.
Status | Status Attribute | Description |
---|---|---|
Available | Stable state | The node is running properly and is connected to the cluster. Nodes in this state can provide services. |
Unavailable | Stable state | The node is not running properly. Instances in this state no longer provide services. In this case, perform the operations in |
Creating | Intermediate state | The node has been created but is not running. |
Installing | Intermediate state | The Kubernetes software is being installed on the node. |
Deleting | Intermediate state | The node is being deleted. If this state persists for a long time, an exception has occurred. |
Stopped | Stable state | The node is stopped properly. A node in this state cannot provide services. You can start the node on the ECS console. |
Error | Stable state | The node is abnormal. Instances in this state no longer provide services. In this case, perform the operations in |
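The distinction between stable and intermediate states in the table can be modeled as a small lookup, for example when deciding whether a long-lived state indicates a problem. This helper is illustrative only and is not part of any CCE API:

```python
# Illustrative mapping of node statuses to their attribute
# (stable vs. intermediate), mirroring the lifecycle table.
STABLE = {"Available", "Unavailable", "Stopped", "Error"}
INTERMEDIATE = {"Creating", "Installing", "Deleting"}

def is_transitional(status: str) -> bool:
    """Intermediate states should resolve on their own; if one
    persists for a long time, an exception has likely occurred."""
    return status in INTERMEDIATE
```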
Mapping between Node OSs and Container Engines
OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime |
---|---|---|---|---|
CentOS 7.x | 3.x | Docker | Clusters of v1.19 and earlier use Device Mapper. Clusters of v1.21 and later use OverlayFS. | runC |
EulerOS 2.5 | 3.x | Docker | Device Mapper | runC |
Node Type | OS | Kernel Version | Container Engine | Container Storage Rootfs | Container Runtime |
---|---|---|---|---|---|
VM | CentOS 7.x | 3.x | Docker | OverlayFS | runC |
BMS in the shared resource pool | EulerOS 2.9 | 4.x | containerd | Device Mapper | Kata |
Secure Containers and Common Containers
Secure (Kata) containers are distinguished from common containers in a few aspects.
The most significant difference is that each secure container (pod) runs on an independent micro-VM, has its own OS kernel, and is securely isolated at the virtualization layer. With this isolation, CCE provides container security that exceeds even that of independent private Kubernetes clusters. Because OS kernels, computing resources, and networks are isolated, pod resources and data cannot be preempted or stolen by other pods.
You can run common or secure containers on a single node in a CCE Turbo cluster. The differences between them are as follows:
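In standard Kubernetes, the runtime used for a pod is typically selected through a RuntimeClass. The sketch below shows the general shape; the handler name `kata` is an assumption and may differ in your cluster, so check the runtime handlers actually installed before using it:

```yaml
# Sketch: selecting the Kata runtime via a Kubernetes RuntimeClass.
# The handler name "kata" is an assumption, not a CCE-confirmed value.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  runtimeClassName: kata   # run this pod as a secure (Kata) container
  containers:
  - name: app
    image: nginx
```

Pods without a `runtimeClassName` fall back to the node's default runtime (runC for common containers).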
Category | Secure Container (Kata) | Common Container (Docker) | Common Container (containerd) |
---|---|---|---|
Node type used to run containers | Bare-metal server (BMS) | VM | VM |
Container engine | containerd | Docker (default for common containers created on the console) | containerd |
Container runtime | Kata | runC | runC |
Container kernel | Exclusive kernel | Shares the kernel with the host | Shares the kernel with the host |
Container isolation | Lightweight VMs | cgroups and namespaces | cgroups and namespaces |
Container engine storage driver | Device Mapper | OverlayFS2 | OverlayFS |
Pod overhead | Memory: 50 MiB; CPU: 0.1 cores. Pod overhead accounts for the resources consumed by the pod infrastructure on top of the container requests and limits. For example, if limits.cpu is set to 0.5 cores and limits.memory to 256 MiB for a pod, the pod will request 0.6 cores of CPU and 306 MiB of memory. | None | None |
Minimal specifications | Memory: 256 MiB; CPU: 0.25 cores | None | None |
Container engine CLI | crictl | docker | crictl |
Pod computing resources | The request and limit values must be the same for both CPU and memory. | The request and limit values can be different for CPU and memory. | The request and limit values can be different for CPU and memory. |
Host network | Not supported | Supported | Supported |
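The pod overhead arithmetic from the table (container limits plus the fixed Kata pod overhead) can be checked with a short sketch:

```python
# Effective resource request for a Kata pod = container limits
# plus the fixed pod overhead (CPU: 0.1 cores, memory: 50 MiB).
KATA_OVERHEAD_CPU = 0.1   # cores
KATA_OVERHEAD_MEM = 50    # MiB

def effective_request(limits_cpu: float, limits_mem_mib: int):
    """Return (cpu_cores, memory_mib) actually requested by the pod."""
    return (limits_cpu + KATA_OVERHEAD_CPU,
            limits_mem_mib + KATA_OVERHEAD_MEM)

# Example from the table: limits.cpu = 0.5 cores, limits.memory = 256 MiB
cpu, mem = effective_request(0.5, 256)
# cpu -> 0.6 cores, mem -> 306 MiB
```

This matches the table's example: 0.5 + 0.1 = 0.6 cores and 256 + 50 = 306 MiB.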