When you use CCE to create a Kubernetes cluster, you will encounter multiple configuration options and terms. This section compares the key configurations for CCE clusters and provides recommendations to help you create a cluster that better suits your needs.
CCE supports CCE Turbo clusters and CCE standard clusters to meet your requirements. This section describes the differences between these two types of clusters.
| Category | Subcategory | CCE Turbo Cluster | CCE Standard Cluster |
|---|---|---|---|
| Cluster | Positioning | Next-gen container cluster designed for Cloud Native 2.0, with accelerated computing, networking, and scheduling | Standard cluster for common commercial use |
| Cluster | Node type | Deployment of VMs | Deployment of VMs and bare-metal servers |
| Networking | Model | Cloud Native Network 2.0: applies to large-scale and high-performance scenarios. Max networking scale: 2,000 nodes | Cloud Native Network 1.0: applies to common, smaller-scale scenarios |
| Networking | Performance | Flattens the VPC network and container network into one, achieving zero performance loss | Overlays the VPC network with the container network, causing certain performance loss |
| Networking | Container network isolation | Associates pods with security groups. Unifies security isolation in and out the cluster via security groups' network policies | |
| Security | Isolation | | Runs common containers, isolated by cgroups |
Kubernetes iterates quickly: new versions fix many bugs and add new features, while old versions are gradually phased out. When creating a cluster, select the latest commercial version supported by CCE.
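If you want to confirm which Kubernetes version an existing cluster runs before planning an upgrade, the API server reports it. Below is a minimal sketch using the Kubernetes Python client, assuming a kubeconfig for the cluster is available locally:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig for the target cluster.
config.load_kube_config()

# Query the API server for its build information.
version = client.VersionApi().get_code()
print(f"Cluster is running Kubernetes {version.git_version}")
```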
This section describes the network models supported by CCE clusters. You can select one model based on your requirements.
A cluster's network model cannot be changed after the cluster is created. Exercise caution when selecting a network model.
| Dimension | Tunnel Network | VPC Network | Cloud Native Network 2.0 |
|---|---|---|---|
| Application scenarios | | | |
| Core technology | OVS | IPvlan and VPC route | VPC ENI/sub-ENI |
| Applicable clusters | CCE standard cluster | CCE standard cluster | CCE Turbo cluster |
| Network isolation | Kubernetes native NetworkPolicy for pods | No | Pods support security group isolation. |
| Passthrough networking | No | No | Yes |
| IP address management | | | The container CIDR block is divided from the VPC subnet and does not need to be allocated separately. |
| Network performance | Performance loss due to VXLAN encapsulation | No tunnel encapsulation. Cross-node packets are forwarded through VPC routers, delivering performance equivalent to that of the host network. | The container network is integrated with the VPC network, eliminating performance loss. |
| Networking scale | A maximum of 2,000 nodes is supported. | Each time a node is added to the cluster, a route is added to the VPC routing tables, so the cluster scale is limited by the VPC routing tables. Suitable for small- and medium-scale networks; 1,000 nodes or fewer is recommended. | A maximum of 2,000 nodes is supported. |
A CCE cluster uses node CIDR blocks, container CIDR blocks, and Service CIDR blocks. When planning network addresses, note the following:
In complex scenarios, for example, multiple clusters use the same VPC or clusters are interconnected across VPCs, determine the number of VPCs, the number of subnets, the container CIDR blocks, and the communication modes of the Service CIDR blocks. For details, see Planning CIDR Blocks for a Cluster.
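Whatever the scenario, the node (VPC subnet), container, and Service CIDR blocks must not overlap. Below is a minimal pre-check sketch using Python's standard `ipaddress` module; the CIDR values are placeholders for illustration, not CCE defaults:

```python
import ipaddress

# Placeholder CIDR blocks; replace them with your own network plan.
cidr_blocks = {
    "VPC/node subnet": "192.168.0.0/18",
    "Container CIDR": "10.0.0.0/16",
    "Service CIDR": "10.247.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in cidr_blocks.items()}

# Pairwise overlap check: any overlap leads to routing conflicts in the cluster.
names = list(networks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if networks[a].overlaps(networks[b]):
            print(f"Conflict: {a} ({networks[a]}) overlaps {b} ({networks[b]})")
        else:
            print(f"OK: {a} and {b} do not overlap")
```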
kube-proxy is a key component of a Kubernetes cluster. It is responsible for load balancing and forwarding traffic between a Service and its backend pods.
CCE supports the iptables and IPVS forwarding modes.
If high stability is required and the number of Services is less than 2000, the iptables forwarding mode is recommended. In other scenarios, the IPVS forwarding mode is recommended.
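To illustrate this rule of thumb, the following sketch counts the Services in an existing cluster with the Kubernetes Python client and prints the suggested mode. The 2,000-Service threshold is taken from the guidance above, and the client setup assumes a local kubeconfig:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()

v1 = client.CoreV1Api()
service_count = len(v1.list_service_for_all_namespaces().items)

# Rule of thumb from this section: iptables when stability matters and there
# are fewer than 2000 Services; IPVS otherwise.
if service_count < 2000:
    print(f"{service_count} Services: iptables is sufficient if stability is the priority.")
else:
    print(f"{service_count} Services: IPVS is recommended for better scalability.")
```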
Additionally, select a proper vCPU/memory ratio based on your requirements. For example, if a service container requires a large amount of memory but few vCPUs, choose specifications with a vCPU/memory ratio of 1:4 for the node where the container runs to reduce resource waste.
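One way to sanity-check the ratio is to total the resource requests of the workloads planned for a node pool and compare the result with standard flavor ratios. The figures below are hypothetical:

```python
# Hypothetical aggregate requests of the workloads planned for one node pool.
total_vcpu_request = 8          # vCPUs
total_memory_request_gib = 30   # GiB

ratio = total_memory_request_gib / total_vcpu_request
print(f"Requested memory per vCPU: {ratio:.1f} GiB (ratio 1:{ratio:.1f})")

# Compare against common flavor ratios (1:2, 1:4, 1:8) and pick the closest
# one to minimize stranded CPU or memory.
candidate_ratios = [2, 4, 8]
best = min(candidate_ratios, key=lambda r: abs(r - ratio))
print(f"Closest standard vCPU/memory ratio: 1:{best}")
```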
CCE supports the containerd and Docker container engines. containerd is recommended because it has a shorter call chain, fewer components, higher stability, and lower consumption of node resources. Since Kubernetes 1.24, Dockershim has been removed and Docker is no longer supported by default. For details, see Kubernetes is Moving on From Dockershim: Commitments and Next Steps. CCE clusters of v1.27 do not support the Docker container engine.
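To check which engine the nodes of an existing cluster are running, read the container runtime version reported in each node's status. A minimal sketch with the Kubernetes Python client, assuming a local kubeconfig:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig.
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # Reported as, for example, "containerd://1.6.14" or "docker://20.10.x".
    runtime = node.status.node_info.container_runtime_version
    print(f"{node.metadata.name}: {runtime}")
```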
Use containerd in typical scenarios. The Docker container engine is supported only in the following scenarios:
Service containers share the kernel and underlying system calls of the node they run on. To ensure compatibility, select a node OS whose Linux distribution version is the same as, or close to, that of the final service container image.