Creating a CCE Turbo Cluster
CCE Turbo clusters run on a cloud native infrastructure that features software-hardware synergy to support passthrough networking, high security and reliability, and intelligent scheduling.
CCE Turbo clusters are paired with the Cloud Native Network 2.0 model for large-scale, high-performance container deployment. Containers are assigned IP addresses from the VPC CIDR block. Containers and nodes can belong to different subnets. Access requests from external networks in a VPC can be directly routed to container IP addresses, which greatly improves networking performance. It is recommended that you go through Cloud Native Network 2.0 <cce_01_0284>
to understand the features and network planning of each CIDR block of Cloud Native Network 2.0.
Notes and Constraints
- During node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with the private DNS server address. By default, a newly created subnet uses the private DNS server. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name.
- You can create a maximum of 50 clusters in a single region.
- CCE Turbo clusters support only Cloud Native Network 2.0. For details about this network model, see Cloud Native Network 2.0 <cce_01_0284>.
- Nodes in a CCE Turbo cluster must use models developed on the QingTian architecture, which features software-hardware synergy.
- CCE Turbo clusters are available only in certain regions.
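The DNS constraint above can be pre-checked from any machine that uses the same DNS configuration as the node subnet. A minimal sketch using only the standard library; the OBS endpoint shown is a placeholder, so substitute the endpoint for your region:

```python
import socket

def can_resolve(domain: str) -> bool:
    """Return True if the current DNS configuration resolves the domain."""
    try:
        socket.getaddrinfo(domain, 443)
        return True
    except socket.gaierror:
        return False

# Placeholder endpoint; replace with the OBS endpoint for your region.
OBS_ENDPOINT = "obs.example-region.example.com"

if can_resolve(OBS_ENDPOINT):
    print("OBS domain resolves; nodes can download software packages")
else:
    print("OBS domain does not resolve; check the subnet DNS configuration")
```

If the check fails on a node, the subnet's DNS server address is the first thing to verify.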
Procedure
Log in to the CCE console. In the navigation pane, choose Resource Management > Clusters. Click Create next to CCE Turbo Cluster.
Figure 1 Creating a CCE Turbo cluster
On the page displayed, set the following parameters:
Basic configuration
Specify the basic cluster configuration.
Table 1 Basic parameters for creating a cluster
Cluster Name: Name of the cluster to be created. The cluster name must be unique under the same account and cannot be changed after the cluster is created. A cluster name contains 4 to 128 characters, must start with a letter, and cannot end with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.
Version: Kubernetes version to use for the cluster.
Management Scale: Maximum number of worker nodes that can be managed by the master nodes of the cluster. You can select 200, 1,000, or 2,000 nodes for your cluster. Master node specifications change with the management scale you choose, and you will be charged accordingly.
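The cluster name rule above can be expressed as a regular expression for pre-checking names in scripts. A sketch only; the console performs the authoritative validation:

```python
import re

# 4-128 characters, starts with a lowercase letter, ends with a letter or
# digit (so never a hyphen), and contains only lowercase letters, digits,
# and hyphens.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9-]{2,126}[a-z0-9]$")

def is_valid_cluster_name(name: str) -> bool:
    return NAME_PATTERN.fullmatch(name) is not None

print(is_valid_cluster_name("turbo-cluster-01"))  # True
print(is_valid_cluster_name("1cluster"))          # False: must start with a letter
print(is_valid_cluster_name("abc-"))              # False: cannot end with a hyphen
```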
Networking configuration
Select the CIDR blocks used by nodes and containers in the cluster. If IP resources in the CIDR blocks are insufficient, nodes and containers cannot be created.
Table 2 Networking parameters
Network Model: Cloud Native Network 2.0. This network model deeply integrates the native elastic network interfaces (ENIs) of VPC, uses the VPC CIDR block to allocate container addresses, and supports direct traffic distribution to containers through a load balancer to deliver high performance.
VPC: Select the VPC used by nodes and containers in the cluster. The VPC cannot be changed after the cluster is created. A VPC provides a secure and logically isolated network environment. If no VPC is available, create one on the VPC console. After the VPC is created, click the refresh icon.
Node Subnet: This parameter is available after you select a VPC. The subnet you select is used by nodes in the cluster and determines the maximum number of nodes in the cluster. This subnet will be the default subnet where your nodes are created. When creating a node, you can select other subnets in the same VPC. A node subnet provides dedicated network resources that are logically isolated from other networks for higher security. If no node subnet is available, click Create Subnet to create one. After the subnet is created, click the refresh icon. For details about the relationship between VPCs, subnets, and clusters, see Cluster Overview <cce_01_0002>. During node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with the private DNS server address. By default, a newly created subnet uses the private DNS server. If you change the subnet DNS configuration, ensure that the DNS server in use can resolve the OBS domain name. The selected subnet cannot be changed after the cluster is created.
Pod Subnet: This parameter is available after you select a VPC. The subnet you select is used by pods in the cluster and determines the maximum number of pods. IP addresses used by pods are allocated from this subnet. The subnet cannot be changed after the cluster is created.
Note
If the pod subnet is the same as the node subnet, pods and nodes share the remaining IP addresses in the subnet. As a result, pods or nodes may fail to be created due to insufficient IP addresses.
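The note above can be made concrete with a little CIDR arithmetic using Python's standard ipaddress module. The subnet size, node count, and number of platform-reserved addresses below are illustrative assumptions, not CCE-specific values:

```python
import ipaddress

# Illustrative shared node/pod subnet.
subnet = ipaddress.ip_network("192.168.0.0/24")
RESERVED = 5   # assumption: addresses reserved by the platform (gateway, DNS, ...)
nodes = 50     # illustrative node count

usable = subnet.num_addresses - RESERVED

# When pods and nodes share one subnet, every node consumes an address,
# shrinking the pool left over for pods.
pods_available = usable - nodes
print(f"{usable} usable addresses, {pods_available} left for pods after {nodes} nodes")
```

With separate node and pod subnets, the two pools cannot starve each other.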
Advanced Settings
Configure enhanced capabilities for your CCE Turbo cluster.
Table 3 Networking parameters
Service Network Segment: An IP address range from which IP addresses are allocated to Kubernetes Services. The CIDR block cannot be changed after the cluster is created and cannot conflict with existing routes. If a conflict exists, select another CIDR block. The default value is 10.247.0.0/16. You can change the CIDR block and mask according to your service requirements. The mask determines the maximum number of Service IP addresses available in the cluster. After you set the mask, the console provides an estimated maximum number of Services you can create in this CIDR block.
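The relationship between the mask and the Service capacity can be checked with the ipaddress module. A sketch only; the console's estimate may subtract a few reserved addresses:

```python
import ipaddress

# Each prefix length gives 2^(32 - prefix) addresses in the Service CIDR block.
for prefix in (16, 18, 20):
    block = ipaddress.ip_network(f"10.247.0.0/{prefix}")
    print(f"/{prefix}: {block.num_addresses} Service IP addresses")
# /16 -> 65536, /18 -> 16384, /20 -> 4096
```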
kube-proxy Mode: How load balancing is implemented between a Service and its backend pods. The value cannot be changed after the cluster is created.
- IPVS: An optimized kube-proxy mode that achieves higher throughput and faster forwarding, ideal for large clusters. This mode supports incremental updates and can keep connections uninterrupted during Service updates. In this mode, when an ingress and a Service use the same ELB instance, the ingress cannot be accessed from nodes and containers in the cluster.
- iptables: Uses iptables rules to implement Service load balancing. In this mode, a large number of iptables rules are generated when many Services are deployed. In addition, non-incremental updates cause latency and even tangible performance issues during service traffic spikes.
Note
- IPVS provides better scalability and performance for large clusters.
- Compared with iptables, IPVS supports more complex load balancing algorithms such as least load first (LLF) and weighted least connections (WLC).
- IPVS supports server health check and connection retries.
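The weighted least connections (WLC) algorithm mentioned above picks the backend whose active-connections-to-weight ratio is smallest. A simplified sketch of the idea, not the actual IPVS implementation (which runs in the kernel):

```python
def pick_backend_wlc(backends):
    """backends: list of (name, active_connections, weight) tuples, weight > 0.

    Returns the backend with the smallest connections/weight ratio, i.e. the
    least loaded server relative to its capacity."""
    return min(backends, key=lambda b: b[1] / b[2])

backends = [
    ("pod-a", 10, 1),  # ratio 10.0
    ("pod-b", 12, 2),  # ratio 6.0  <- selected
    ("pod-c", 30, 4),  # ratio 7.5
]
print(pick_backend_wlc(backends)[0])  # pod-b
```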
CPU Policy:
- On: Exclusive CPU cores can be allocated to workload pods. Select On if your workload is sensitive to CPU cache and scheduling latency.
- Off: Exclusive CPU cores are not allocated to workload pods. Select Off if you want a large pool of shareable CPU cores.
Click Next: Confirm to review the configurations and change them if required.
Click Submit.
It takes about 6 to 10 minutes to create a cluster. You can click Back to Cluster List to perform other operations on the cluster or click Go to Cluster Events to view the cluster details.
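If you automate cluster creation through the cloud API instead of the console, the usual pattern is to poll the cluster status until it becomes Available. A hedged sketch; get_status is a placeholder for a real CCE API or SDK call:

```python
import time

def wait_for_cluster(get_status, timeout=900, interval=30):
    """Poll a status callable until it returns 'Available' or timeout (seconds)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "Available":
            return True
        if status in ("Error", "Unavailable"):
            raise RuntimeError(f"cluster entered terminal state: {status}")
        time.sleep(interval)
    return False

# Demonstration with a stubbed status sequence standing in for the real API:
states = iter(["Creating", "Creating", "Available"])
print(wait_for_cluster(lambda: next(states), timeout=5, interval=0))  # True
```

A 15-minute timeout comfortably covers the 6 to 10 minutes quoted above.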
If the cluster status is Available, the CCE Turbo cluster is successfully created, and Turbo is displayed next to the cluster name.
Related Operations
- Using kubectl to connect to the cluster: see Connecting to a Cluster Using kubectl <cce_01_0107>.
- Logging in to a node: see Logging In to a Node <cce_01_0185>.
- Creating a namespace: You can create multiple namespaces in a cluster and organize resources in the cluster into different namespaces. These namespaces serve as logical groups and can be managed separately. For details about how to create a namespace for a cluster, see Namespaces <cce_01_0030>.
- Creating a workload: Once the cluster is created, you can use an image to create an application that can be accessed from public networks. For details, see Creating a Deployment <cce_01_0047>, Creating a StatefulSet <cce_01_0048>, or Creating a DaemonSet <cce_01_0216>.
- Viewing cluster details: Click the cluster name to view cluster details.
Table 4 Details about the created cluster
Basic Information: View the details and running status of the cluster.
Monitoring: View the CPU and memory allocation rates of all nodes in the cluster (that is, the maximum allocated amount), as well as the CPU usage, memory usage, and specifications of the master node(s).
Events: View cluster events. You can set search criteria, such as the event name or the time segment during which an event is generated, to filter events.
Auto Scaling: Configure auto scaling to add or reduce worker nodes in a cluster to meet service requirements. For details, see Setting Cluster Auto Scaling <cce_01_0157>. Clusters of v1.17 do not support auto scaling using AOM. You can use node pools for auto scaling instead. For details, see Node Pool Overview <cce_01_0081>.
kubectl: To access a Kubernetes cluster from a PC, you need to use the Kubernetes command line tool kubectl. For details, see Connecting to a Cluster Using kubectl <cce_01_0107>.
Resource Tags: Resource tags can be added to classify resources. You can create predefined tags in Tag Management Service (TMS). Predefined tags are visible to all service resources that support tagging, and you can use them to improve tag creation and resource migration efficiency. CCE will automatically create the "CCE-Dynamic-Provisioning-Node=Node ID" tag. A maximum of 5 tags can be added.
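The tag constraints above (key=value pairs, at most 5 user-added tags) can be pre-checked in automation scripts. A sketch under those assumptions; the exact key/value character rules enforced by TMS are not reproduced here:

```python
def validate_tags(tags, max_tags=5):
    """tags: dict mapping tag keys to values. Raises ValueError on violation."""
    if len(tags) > max_tags:
        raise ValueError(f"at most {max_tags} tags are allowed, got {len(tags)}")
    for key in tags:
        if not key:
            raise ValueError("tag keys must not be empty")
    return True

print(validate_tags({"env": "prod", "team": "platform"}))  # True
```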