
original_name

cce_01_0281.html

Overview

The container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select the following network models for your cluster:

  • Container tunnel network
  • VPC network
  • Cloud Native Network 2.0
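
The network model is selected when the cluster is created and cannot be changed afterwards (see the caution below). The following is a minimal sketch of a cluster-creation request body that selects the model through containerNetwork.mode; the field names and mode values follow the public CCE v3 API, but treat them as assumptions and verify them against the API reference for your region.

# Minimal sketch of a CCE v3 cluster-creation request body (illustrative values).
# The containerNetwork.mode value selects the network model and cannot be
# changed after the cluster is created.
cluster_request = {
    "kind": "Cluster",
    "apiVersion": "v3",
    "metadata": {"name": "demo-cluster"},
    "spec": {
        "type": "VirtualMachine",
        "flavor": "cce.s1.small",
        "hostNetwork": {
            "vpc": "<vpc-id>",        # VPC that hosts the cluster nodes
            "subnet": "<subnet-id>",  # node subnet
        },
        "containerNetwork": {
            # "overlay_l2" -> container tunnel network
            # "vpc-router" -> VPC network
            # "eni"        -> Cloud Native Network 2.0 (CCE Turbo clusters)
            "mode": "vpc-router",
            "cidr": "172.16.0.0/16",  # container CIDR block (allocated separately)
        },
    },
}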

Network Model Comparison

Table 1 <cce_01_0281__en-us_topic_0146398798_table715802210336> describes the differences between the network models supported by CCE.

Caution

After a cluster is created, the network model cannot be changed.

Table 1 Network model comparison

Core technology
  • Tunnel network: OVS
  • VPC network: IPvlan and VPC route
  • Cloud Native Network 2.0: VPC ENI/sub-ENI

Applicable clusters
  • Tunnel network: CCE cluster
  • VPC network: CCE cluster
  • Cloud Native Network 2.0: CCE Turbo cluster

Network isolation
  • Tunnel network: Yes. For details, see Network Policies <cce_01_0059>. (A usage sketch follows this table.)
  • VPC network: No
  • Cloud Native Network 2.0: Yes. For details, see SecurityGroups <cce_01_0288>.

Passthrough networking
  • Tunnel network: No
  • VPC network: No
  • Cloud Native Network 2.0: Yes

IP address management
  • Tunnel network: The container CIDR block is allocated separately. CIDR blocks are divided by node and dynamically allocated (new CIDR blocks can be added after the initial allocation).
  • VPC network: The container CIDR block is allocated separately. CIDR blocks are divided by node and statically allocated (the CIDR block cannot be changed after a node is created).
  • Cloud Native Network 2.0: The container CIDR block is divided from the VPC subnet and does not need to be allocated separately.

Network performance
  • Tunnel network: Performance loss due to VXLAN encapsulation
  • VPC network: No tunnel encapsulation. Cross-node packets are forwarded through VPC routers, delivering performance equivalent to that of the host network.
  • Cloud Native Network 2.0: The container network is integrated with the VPC network, eliminating performance loss.

Networking scale
  • Tunnel network: A maximum of 2,000 nodes are supported.
  • VPC network: By default, 200 nodes are supported. Each time a node is added to the cluster, a route is added to the VPC route table, so the cluster scale is limited by the VPC route table.
  • Cloud Native Network 2.0: A maximum of 2,000 nodes are supported.

Application scenarios
  • Tunnel network: Common container service scenarios; scenarios that do not have high requirements on network latency and bandwidth.
  • VPC network: Scenarios that have high requirements on network latency and bandwidth; containers that communicate with VMs through a microservice registration framework, such as Dubbo or CSE.
  • Cloud Native Network 2.0: Scenarios that have high requirements on network latency, bandwidth, and performance; containers that communicate with VMs through a microservice registration framework, such as Dubbo or CSE.
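
As noted in the Network isolation row, the tunnel network model enforces isolation with Kubernetes network policies. The following minimal sketch uses the official kubernetes Python client to create a policy that allows ingress to pods labeled app=backend only from pods labeled app=frontend in the same namespace; the policy name, namespace, and labels are illustrative.

from kubernetes import client, config

# Load credentials from the cluster's kubeconfig file.
config.load_kube_config()

# Allow ingress to app=backend pods only from app=frontend pods in the
# same namespace; all other ingress traffic to those pods is denied.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy
)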

Important

  1. The scale of a cluster that uses the VPC network model is limited by the custom routes of the VPC. Therefore, estimate the number of required nodes before creating the cluster (a rough estimation sketch follows this list).
  2. The scale of a cluster that uses the Cloud Native Network 2.0 model depends on the size of the VPC subnet CIDR block selected for the network attachment definition. Before creating a cluster, evaluate the scale of your cluster.
  3. By default, the VPC network model supports direct communication between containers and hosts in the same VPC. If a peering connection is configured between this VPC and another VPC, containers can also communicate directly with hosts in the peer VPC. Similarly, in hybrid networking scenarios such as Direct Connect and VPN, communication between containers and hosts on the peer end can be achieved with proper planning.
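
The following sketch illustrates the estimation mentioned in note 1 for the VPC network model, where each node is statically assigned one fixed-size slice of the container CIDR block. The prefix lengths are assumptions for illustration, not CCE defaults; the VPC route table adds a further per-cluster limit (one route per node).

import ipaddress

# Rough capacity estimate for the VPC network model: the container CIDR
# block is split into fixed-size per-node slices, so the slice size caps
# both the maximum node count and the pod IP addresses per node.
container_cidr = ipaddress.ip_network("172.16.0.0/16")  # container CIDR block (assumed)
node_slice_prefix = 24                                   # per-node CIDR slice (assumed)

max_nodes = 2 ** (node_slice_prefix - container_cidr.prefixlen)
addresses_per_node = 2 ** (32 - node_slice_prefix)

print(f"Maximum nodes by container CIDR: {max_nodes}")            # 256
print(f"Pod IP addresses per node:       {addresses_per_node}")   # 256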