:original_name: cce_01_0298.html

.. _cce_01_0298:

Creating a CCE Turbo Cluster
============================

CCE Turbo clusters run on a cloud native infrastructure that features software-hardware synergy to support passthrough networking, high security and reliability, and intelligent scheduling. CCE Turbo clusters are paired with the Cloud Native Network 2.0 model for large-scale, high-performance container deployment. Containers are assigned IP addresses from the VPC CIDR block. Containers and nodes can belong to different subnets. Access requests from external networks in a VPC can be directly routed to container IP addresses, which greatly improves networking performance.

It is recommended that you read :ref:`Cloud Native Network 2.0 ` first to understand the features of Cloud Native Network 2.0 and how to plan each of its CIDR blocks.

Notes and Constraints
---------------------

-  During node creation, software packages are downloaded from OBS using a domain name. A private DNS server must resolve this OBS domain name, so the subnet where the node resides must be configured with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the DNS configuration of the subnet, ensure that the DNS server in use can resolve the OBS domain name.
-  You can create a maximum of 50 clusters in a single region.
-  CCE Turbo clusters support only Cloud Native Network 2.0. For details about this network model, see :ref:`Cloud Native Network 2.0 `.
-  Nodes in a CCE Turbo cluster must be models developed on the QingTian architecture, which features software-hardware synergy.
-  CCE Turbo clusters are available only in certain regions.

Procedure
---------

#. Log in to the CCE console. In the navigation pane, choose **Resource Management** > **Clusters**. Click **Create** next to **CCE Turbo Cluster**.

   .. figure:: /_static/images/en-us_image_0000001150420952.png
      :alt: **Figure 1** Creating a CCE Turbo cluster

      **Figure 1** Creating a CCE Turbo cluster

#. On the page displayed, set the following parameters:

   **Basic configuration**

   Specify the basic cluster configuration.

   .. list-table:: **Table 1** Basic parameters for creating a cluster
      :widths: 20 80
      :header-rows: 1

      * - Parameter
        - Description
      * - Cluster Name
        - Name of the cluster to be created. The cluster name must be unique under the same account and cannot be changed after the cluster is created.

          A cluster name contains 4 to 128 characters, starting with a letter and not ending with a hyphen (-). Only lowercase letters, digits, and hyphens (-) are allowed.
      * - Version
        - Version of Kubernetes to use for the cluster.
      * - Management Scale
        - Maximum number of worker nodes that can be managed by the master nodes of the cluster. You can select 200 nodes, 1,000 nodes, or 2,000 nodes for your cluster.

          Master node specifications change with the cluster management scale you choose, and you will be charged accordingly.

   **Networking configuration**

   Select the CIDR blocks used by nodes and containers in the cluster. If IP addresses in the CIDR blocks are insufficient, nodes and containers cannot be created.

   .. list-table:: **Table 2** Networking parameters
      :widths: 20 80
      :header-rows: 1

      * - Parameter
        - Description
      * - Network Model
        - **Cloud Native Network 2.0**: This network model deeply integrates the native elastic network interfaces (ENIs) of VPC, uses the VPC CIDR block to allocate container addresses, and supports direct traffic distribution to containers through a load balancer to deliver high performance.
      * - VPC
        - Select the VPC used by nodes and containers in the cluster. The VPC cannot be changed after the cluster is created.

          A VPC provides a secure and logically isolated network environment.

          If no VPC is available, create one on the **VPC console**. After the VPC is created, click the refresh icon.
      * - Node Subnet
        - This parameter is available after you select a VPC.

          The subnet you select is used by nodes in the cluster and determines the maximum number of nodes in the cluster. This subnet will be the default subnet where your nodes are created. When creating a node, you can select other subnets in the same VPC.

          A node subnet provides dedicated network resources that are logically isolated from other networks for higher security.

          If no node subnet is available, click **Create Subnet** to create one. After the subnet is created, click the refresh icon. For details about the relationship between VPCs, subnets, and clusters, see :ref:`Cluster Overview `.

          During node creation, software packages are downloaded from OBS using a domain name. A private DNS server must resolve this OBS domain name, so the subnet where the node resides must be configured with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the DNS configuration of the subnet, ensure that the DNS server in use can resolve the OBS domain name.

          **The selected subnet cannot be changed after the cluster is created.**
      * - Pod Subnet
        - This parameter is available after you select a VPC.

          The subnet you select is used by pods in the cluster and determines the maximum number of pods in the cluster. The subnet cannot be changed after the cluster is created.

          IP addresses used by pods will be allocated from this subnet.

          .. note::

             If the pod subnet is the same as the node subnet, pods and nodes share the remaining IP addresses in the subnet. As a result, pods or nodes may fail to be created due to insufficient IP addresses.

   **Advanced Settings**

   Configure enhanced capabilities for your CCE Turbo cluster.

   .. list-table:: **Table 3** Advanced settings
      :widths: 20 80
      :header-rows: 1

      * - Parameter
        - Description
      * - Service Network Segment
        - An IP address range from which IP addresses are allocated to Kubernetes Services. After the cluster is created, the CIDR block cannot be changed. The Service CIDR block cannot conflict with the created routes. If they conflict, select another CIDR block.

          The default value is **10.247.0.0/16**. You can change the CIDR block and mask according to your service requirements. The mask determines the maximum number of Service IP addresses available in the cluster.

          After you set the mask, the console will provide an estimate of the maximum number of Services you can create in this CIDR block.
      * - kube-proxy Mode
        - The mode used to implement load balancing between Services and their backend pods. The value cannot be changed after the cluster is created.

          - **IPVS**: An optimized kube-proxy mode with higher throughput and faster forwarding, ideal for large clusters. This mode supports incremental updates and can keep connections uninterrupted during Service updates.

            In this mode, when an ingress and a Service use the same ELB instance, the ingress cannot be accessed from the nodes and containers in the cluster.

          - **iptables**: Uses iptables rules to implement Service load balancing. In this mode, a large number of iptables rules are generated when many Services are deployed. In addition, non-incremental updates cause latency and even tangible performance issues in the case of service traffic spikes.

          .. note::

             - IPVS provides better scalability and performance for large clusters.
             - Compared with iptables, IPVS supports more complex load balancing algorithms such as least load first (LLF) and weighted least connections (WLC).
             - IPVS supports server health checks and connection retries.
      * - CPU Policy
        - - **On**: Exclusive CPU cores can be allocated to workload pods. Select **On** if your workload is sensitive to latency in CPU cache and scheduling.
          - **Off**: Exclusive CPU cores will not be allocated to workload pods. Select **Off** if you want a large pool of shareable CPU cores.

#. Click **Next: Confirm** to review the configurations and change them if required.

#. Click **Submit**.

   It takes about 6 to 10 minutes to create a cluster. You can click **Back to Cluster List** to perform other operations on the cluster or click **Go to Cluster Events** to view the cluster details.

#. If the cluster status is **Available**, the CCE Turbo cluster is successfully created, and **Turbo** is displayed next to the cluster name.

Related Operations
------------------

-  Using kubectl to connect to the cluster: :ref:`Connecting to a Cluster Using kubectl `
-  Logging in to a node: :ref:`Logging In to a Node `
-  Creating a namespace: You can create multiple namespaces in a cluster and organize the resources in the cluster into different namespaces. These namespaces serve as logical groups and can be managed separately. For details about how to create a namespace for a cluster, see :ref:`Namespaces `.
-  Creating a workload: Once the cluster is created, you can use an image to create an application that can be accessed from public networks. For details, see :ref:`Creating a Deployment `, :ref:`Creating a StatefulSet `, or :ref:`Creating a DaemonSet `.
-  Viewing cluster details: Click the cluster name to view cluster details.

   .. list-table:: **Table 4** Details about the created cluster
      :widths: 20 80
      :header-rows: 1

      * - Tab
        - Description
      * - Basic Information
        - You can view the details and running status of the cluster.
      * - Monitoring
        - You can view the CPU and memory allocation rates of all nodes in the cluster (that is, the maximum allocated amounts), as well as the CPU usage, memory usage, and specifications of the master node(s).
      * - Events
        - - View cluster events.
          - Set search criteria, such as the event name or the time segment during which an event was generated, to filter events.
      * - Auto Scaling
        - You can configure auto scaling to add or remove worker nodes in a cluster to meet service requirements. For details, see :ref:`Setting Cluster Auto Scaling `.

          Clusters of v1.17 do not support auto scaling using AOM. You can use node pools for auto scaling. For details, see :ref:`Node Pool Overview `.
      * - kubectl
        - To access a Kubernetes cluster from a PC, you need to use the Kubernetes command line tool `kubectl `__. For details, see :ref:`Connecting to a Cluster Using kubectl `.
      * - Resource Tags
        - Resource tags can be added to classify resources.

          You can create **predefined tags** in Tag Management Service (TMS). Predefined tags are visible to all service resources that support the tagging function. You can use predefined tags to improve tag creation and resource migration efficiency.

          CCE automatically creates the "CCE-Dynamic-Provisioning-Node=\ *Node ID*" tag. A maximum of 5 tags can be added.
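
The relationship between the Service CIDR mask and the number of available Service IP addresses, described under **Service Network Segment** above, can be illustrated with a short calculation. This is a minimal sketch of the underlying arithmetic only, not CCE's exact estimation logic; the console may reserve additional addresses, so treat the result as an upper bound.

```python
import ipaddress

def service_ip_capacity(cidr: str) -> int:
    """Return the total number of IP addresses in the given CIDR block,
    an upper bound on the Service IPs available to the cluster."""
    network = ipaddress.ip_network(cidr)
    return network.num_addresses

# The default Service CIDR 10.247.0.0/16 contains 2**(32-16) = 65536 addresses.
print(service_ip_capacity("10.247.0.0/16"))  # 65536
# A longer mask shrinks the pool: a /18 leaves 2**(32-18) = 16384 addresses.
print(service_ip_capacity("10.247.0.0/18"))  # 16384
```

This is why the mask cannot be loosened later: the CIDR block, and with it the Service capacity, is fixed at cluster creation.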
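
The kube-proxy note above mentions that IPVS supports algorithms such as weighted least connections (WLC). As a rough intuition for how WLC picks a backend, the sketch below selects the pod with the smallest active-connections-to-weight ratio. The backend names, weights, and connection counts are hypothetical, and real IPVS scheduling runs in the kernel, not in user-space Python.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str          # hypothetical pod identifier
    weight: int        # scheduling weight (higher = can take more traffic)
    active_conns: int  # current number of active connections

def pick_wlc(backends: list[Backend]) -> Backend:
    """Weighted least connections: choose the backend with the
    lowest active_conns / weight ratio."""
    return min(backends, key=lambda b: b.active_conns / b.weight)

pods = [
    Backend("pod-a", weight=1, active_conns=3),  # ratio 3.0
    Backend("pod-b", weight=3, active_conns=6),  # ratio 2.0 -> preferred
    Backend("pod-c", weight=2, active_conns=5),  # ratio 2.5
]
print(pick_wlc(pods).name)  # pod-b
```

Plain iptables-mode kube-proxy, by contrast, selects backends with randomized rules rather than tracking per-backend load, which is one reason IPVS scales better for large clusters.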