
Overview

The container network assigns IP addresses to pods in a cluster and provides networking services. In CCE, you can select the following network models for your cluster:

  • Tunnel network
  • VPC network
  • Cloud Native Network 2.0
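
The network model is fixed when the cluster is created (see the caution below), so it is chosen as part of the cluster-creation request. The sketch below is a rough illustration only: the request-body fields and the mode identifiers ("overlay_l2" for the tunnel network, "vpc-router" for the VPC network, and "eni" for Cloud Native Network 2.0) are assumptions about the CCE v3 API and should be checked against the API reference; the flavor and CIDR values are placeholders.

# Sketch of the containerNetwork portion of a CCE v3 cluster-creation request body.
# Field names and mode identifiers are assumptions; flavor and CIDR are placeholders.
import json

cluster_request = {
    "kind": "Cluster",
    "apiVersion": "v3",
    "metadata": {"name": "demo-cluster"},
    "spec": {
        "type": "VirtualMachine",
        "flavor": "cce.s1.small",       # placeholder cluster flavor
        "containerNetwork": {
            # "overlay_l2" = tunnel network, "vpc-router" = VPC network,
            # "eni" = Cloud Native Network 2.0 (CCE Turbo clusters)
            "mode": "vpc-router",
            "cidr": "172.16.0.0/16",    # container CIDR; not used with "eni"
        },
    },
}

print(json.dumps(cluster_request, indent=2))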

Network Model Comparison

Table 1 describes the differences between the network models supported by CCE.

Caution

After a cluster is created, the network model cannot be changed.

Table 1 Network model comparison

Application scenarios
  • Tunnel network: Common container service scenarios; scenarios that do not have high requirements on network latency and bandwidth.
  • VPC network: Scenarios that have high requirements on network latency and bandwidth; containers can communicate with VMs using a microservice registration framework, such as Dubbo and CSE.
  • Cloud Native Network 2.0: Scenarios that have high requirements on network latency, bandwidth, and performance; containers can communicate with VMs using a microservice registration framework, such as Dubbo and CSE.

Core technology
  • Tunnel network: OVS
  • VPC network: IPvlan and VPC route
  • Cloud Native Network 2.0: VPC ENI/sub-ENI

Applicable clusters
  • Tunnel network: CCE cluster
  • VPC network: CCE cluster
  • Cloud Native Network 2.0: CCE Turbo cluster

Network isolation
  • Tunnel network: Kubernetes native NetworkPolicy for pods (see the sketch after this table).
  • VPC network: No
  • Cloud Native Network 2.0: Pods support security group isolation.

Passthrough networking
  • Tunnel network: No
  • VPC network: No
  • Cloud Native Network 2.0: Yes

IP address management
  • Tunnel network: The container CIDR block is allocated separately. CIDR blocks are divided by node and dynamically allocated (CIDR blocks can be added after the initial allocation).
  • VPC network: The container CIDR block is allocated separately. CIDR blocks are divided by node and statically allocated (a node's CIDR block cannot be changed after the node is created).
  • Cloud Native Network 2.0: The container CIDR block is divided from the VPC subnet and does not need to be allocated separately.

Network performance
  • Tunnel network: Performance loss due to VXLAN encapsulation.
  • VPC network: No tunnel encapsulation. Cross-node packets are forwarded through VPC routers, delivering performance equivalent to that of the host network.
  • Cloud Native Network 2.0: The container network is integrated with the VPC network, eliminating performance loss.

Networking scale
  • Tunnel network: A maximum of 2,000 nodes are supported.
  • VPC network: By default, 200 nodes are supported. Each time a node is added to the cluster, a route is added to the VPC route tables, so the cluster scale is limited by the VPC route tables.
  • Cloud Native Network 2.0: A maximum of 2,000 nodes are supported.
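
To illustrate the network isolation row for the tunnel network model: pod-level isolation is expressed with standard Kubernetes NetworkPolicy objects. The following is a minimal sketch using the official Kubernetes Python client; the namespace and labels are placeholders, not values prescribed by CCE.

# Minimal NetworkPolicy: allow ingress to pods labeled app=web only from pods
# labeled app=frontend in the same namespace. Namespace and labels are
# illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("default", policy)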

Important

  1. The scale of a cluster that uses the VPC network model is limited by the number of custom routes the VPC supports. Therefore, estimate the number of nodes you will need before creating the cluster.
  2. The scale of a cluster that uses the Cloud Native Network 2.0 model depends on the size of the VPC subnet CIDR block selected for the network attachment definition. Evaluate the required cluster scale before creating the cluster.
  3. By default, the VPC network model allows containers and hosts in the same VPC to communicate directly. If a VPC peering connection is configured between this VPC and another VPC, containers can also communicate directly with hosts in the peer VPC. Similarly, in hybrid networking scenarios such as Direct Connect and VPN, containers can communicate with hosts on the peer end if the network is properly planned.
  4. Do not change the mask of the VPC's primary CIDR block after a cluster is created. Otherwise, the network will malfunction.
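
Notes 1 and 2 come down to CIDR arithmetic: the number of nodes a cluster can hold is roughly the container (or subnet) CIDR size divided by the per-node allocation. The sketch below shows that estimate; the /16 container CIDR and the 128-addresses-per-node allocation are example inputs, not CCE defaults.

# Rough capacity estimate for the VPC network model: each node receives a
# fixed block of container IP addresses out of the container CIDR, so the
# CIDR size bounds the node count. Example inputs, not CCE defaults.
import ipaddress

container_cidr = ipaddress.ip_network("172.16.0.0/16")  # example container CIDR
ips_per_node = 128                                       # example per-node allocation

max_nodes_by_cidr = container_cidr.num_addresses // ips_per_node
print(f"{container_cidr} with {ips_per_node} IPs per node -> "
      f"at most {max_nodes_by_cidr} nodes")              # 65536 // 128 = 512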