Update content

OpenTelekomCloud Proposal Bot 2023-03-14 16:00:59 +00:00
parent 68eb822707
commit e2a64a4766
17 changed files with 539 additions and 113 deletions

Binary image files changed (not shown): three images removed (130 KiB, 104 KiB, and 83 KiB) and three images added (69 KiB, 41 KiB, and 91 KiB).

View File

@ -9,6 +9,7 @@ Add-ons
- :ref:`coredns (System Resource Add-On, Mandatory) <cce_10_0129>`
- :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>`
- :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>`
- :ref:`npd <cce_10_0132>`
- :ref:`autoscaler <cce_10_0154>`
- :ref:`metrics-server <cce_10_0205>`
- :ref:`gpu-beta <cce_10_0141>`
@ -22,6 +23,7 @@ Add-ons
coredns_system_resource_add-on_mandatory
storage-driver_system_resource_add-on_discarded
everest_system_resource_add-on_mandatory
npd
autoscaler
metrics-server
gpu-beta

umn/source/add-ons/npd.rst (new file, 384 lines)

File diff suppressed because it is too large.

View File

@ -9,20 +9,22 @@ CCE provides multiple types of add-ons to extend cluster functions and meet feat
.. table:: **Table 1** Add-on list

+----------------------------------------+--------------------------------------------------------------------------------+
| Add-on Name | Introduction |
+========================================+================================================================================+
| :ref:`coredns (System Resource Add-On, Mandatory) <cce_10_0129>` | The coredns add-on is a DNS server that provides domain name resolution services for Kubernetes clusters. coredns chains plug-ins to provide additional features. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>` | storage-driver is a FlexVolume driver used to support IaaS storage services such as EVS, SFS, and OBS. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>` | Everest is a cloud native container storage system. Based on the Container Storage Interface (CSI), clusters of Kubernetes v1.15.6 or later obtain access to cloud storage services. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`npd <cce_10_0132>` | node-problem-detector (npd for short) is an add-on that monitors abnormal events of cluster nodes and connects to a third-party monitoring platform. It is a daemon running on each node. It collects node issues from different daemons and reports them to the API server. The npd add-on can run as a DaemonSet or a daemon. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`autoscaler <cce_10_0154>` | The autoscaler add-on resizes a cluster based on pod scheduling status and resource usage. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`metrics-server <cce_10_0205>` | metrics-server is an aggregator for monitoring data of core cluster resources. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`gpu-beta <cce_10_0141>` | gpu-beta is a device management add-on that supports GPUs in containers. It supports only NVIDIA drivers. |
+----------------------------------------+--------------------------------------------------------------------------------+
| :ref:`volcano <cce_10_0193>` | Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. |
+----------------------------------------+--------------------------------------------------------------------------------+
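If the npd add-on is installed, its pods and the problems it reports can be checked from the command line. The following is only a sketch: the workload name may vary by add-on version, and **<node-name>** is a placeholder.

.. code-block::

   # npd typically runs as a DaemonSet in the kube-system namespace.
   kubectl get daemonset -n kube-system | grep -i node-problem-detector
   # Problems detected by npd surface as node conditions and events.
   kubectl describe node <node-name> | grep -A 10 "Conditions:"
   kubectl get events -A | grep -i node-problem-detector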

View File

@ -17,8 +17,6 @@ Notes and Constraints
- HPA policies can be created only for clusters of v1.13 or later.
- Only one policy can be created for each workload. You can create an HPA policy.
- For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS volumes mounted, the existing pods cannot be read or written when a new pod is scheduled to another node.

  For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS volume mounted, a new pod cannot be started because EVS disks cannot be attached.
@ -36,25 +34,34 @@ Procedure
.. table:: **Table 1** HPA policy parameters

+----------------------------------------+--------------------------------------------------------------------------------+
| Parameter | Description |
+========================================+================================================================================+
| Policy Name | Name of the policy to be created. Set this parameter as required. |
+----------------------------------------+--------------------------------------------------------------------------------+
| Namespace | Namespace to which the workload belongs. |
+----------------------------------------+--------------------------------------------------------------------------------+
| Associated Workload | Workload with which the HPA policy is associated. |
+----------------------------------------+--------------------------------------------------------------------------------+
| Pod Range | Minimum and maximum numbers of pods. |
| | |
| | When a policy is triggered, the workload pods are scaled within this range. |
+----------------------------------------+--------------------------------------------------------------------------------+
| Cooldown Period | Interval between a scale-in and a scale-out. The unit is minute. **The interval cannot be shorter than 1 minute.** |
| | |
| | **This parameter is available only for clusters of v1.15 and later. It is not supported in clusters of v1.13 or earlier.** |
| | **This parameter is supported only from clusters of v1.15 to v1.23.** |
| | |
| | This parameter indicates the interval between consecutive scaling operations. The cooldown period ensures that a scaling operation is initiated only when the previous one is completed and the system is running stably. |
+----------------------------------------+--------------------------------------------------------------------------------+
| Scaling Behavior | **This parameter is supported only in clusters of v1.25 or later.** |
| | |
| | - **Default**: Scales workloads using the Kubernetes default behavior. For details, see `Default Behavior <https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#default-behavior>`__. |
| | - **Custom**: Scales workloads using custom policies such as stabilization window, steps, and priorities. Unspecified parameters use the values recommended by Kubernetes. |
| | |
| |   - **Disable scale-out/scale-in**: Select whether to disable scale-out or scale-in. |
| |   - **Stabilization Window**: A period during which CCE continuously checks whether the metrics used for scaling keep fluctuating. CCE triggers scaling if the desired state is not maintained for the entire window. This window restricts the unwanted flapping of pod count due to metric changes. |
| |   - **Step**: specifies the scaling step. You can set the number or percentage of pods to be scaled in or out within a specified period. If there are multiple policies, you can select the policy that maximizes or minimizes the number of pods. |
+----------------------------------------+--------------------------------------------------------------------------------+
| System Policy | - **Metric**: You can select **CPU usage** or **Memory usage**. |
| | |
| | .. note:: |
@ -72,7 +79,7 @@ Procedure
| | - **Tolerance Range**: Scaling is not triggered when the metric value is within the tolerance range. The desired value must be within the tolerance range. |
| | |
| | If the metric value is greater than the scale-in threshold and less than the scale-out threshold, no scaling is triggered. **This parameter is supported only in clusters of v1.15 or later.** |
+----------------------------------------+--------------------------------------------------------------------------------+
| Custom Policy (supported only in clusters of v1.15 or later) | .. note:: |
| | |
| | Before setting a custom policy, you need to install an add-on that supports custom metric collection in the cluster, for example, prometheus add-on. |
@ -90,6 +97,6 @@ Procedure
| | When calculating the number of pods to be added or reduced, the HPA policy uses the maximum number of pods in the last 5 minutes. |
| | |
| | - **Tolerance Range**: Scaling is not triggered when the metric value is within the tolerance range. The desired value must be within the tolerance range. |
+----------------------------------------+--------------------------------------------------------------------------------+

#. Click **Create**.
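For reference, the console parameters above map onto the fields of a standard Kubernetes ``autoscaling/v2`` HorizontalPodAutoscaler. The following is only a sketch with placeholder names (**hpa-example**, **nginx-example**); it is not the exact object generated by the console:

.. code-block::

   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: hpa-example              # hypothetical policy name
     namespace: default             # namespace of the associated workload
   spec:
     scaleTargetRef:                # associated workload
       apiVersion: apps/v1
       kind: Deployment
       name: nginx-example          # hypothetical workload name
     minReplicas: 1                 # pod range: minimum
     maxReplicas: 5                 # pod range: maximum
     metrics:
     - type: Resource
       resource:
         name: cpu                  # system policy metric: CPU usage
         target:
           type: Utilization
           averageUtilization: 70   # desired value for the metric
     behavior:                      # "Scaling Behavior" (console: clusters of v1.25 or later)
       scaleDown:
         stabilizationWindowSeconds: 300   # stabilization window for scale-in
         policies:
         - type: Pods
           value: 1                        # step: remove at most one pod per period
           periodSeconds: 60
       scaleUp:
         policies:
         - type: Percent
           value: 100                      # step: at most double the pod count per period
           periodSeconds: 60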

View File

@ -137,7 +137,7 @@ Creating a Node Pool and a Node Scaling Policy
- **Max. Nodes**: Set it to **5**, indicating the maximum number of nodes in a node pool.
- **Specifications**: 2 vCPUs \| 4 GiB

Retain the defaults for other parameters. For details, see `Creating a Node Pool <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0012.html>`__.
Retain the defaults for other parameters. For details, see `Creating a Node Pool <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0012.html>`__.

#. Click **Add-ons** on the left of the cluster console, click **Edit** under the autoscaler add-on, modify the add-on configuration, enable **Auto node scale-in**, and configure scale-in parameters. For example, trigger scale-in when the node resource utilization is less than 50%.
@ -147,7 +147,7 @@ Creating a Node Pool and a Node Scaling Policy
#. Click **Node Scaling** on the left of the cluster console and click **Create Node Scaling Policy** in the upper right corner. Node scaling policies added here trigger scale-out based on the CPU/memory allocation rate or periodically.

As shown in the following figure, when the cluster CPU allocation rate is greater than 70%, one node will be added. A node scaling policy needs to be associated with a node pool. Multiple node pools can be associated. When you need to scale nodes, node with proper specifications will be added or reduced from the node pool based on the minimum waste principle. For details, see `Creating a Node Scaling Policy <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0209.html>`__.
As shown in the following figure, when the cluster CPU allocation rate is greater than 70%, one node will be added. A node scaling policy needs to be associated with a node pool. Multiple node pools can be associated. When you need to scale nodes, node with proper specifications will be added or reduced from the node pool based on the minimum waste principle. For details, see `Creating a Node Scaling Policy <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0209.html>`__.

|image3|
@ -372,7 +372,7 @@ Observing the Auto Scaling Process
You can also view the HPA policy execution history on the console. Wait until the one node is reduced.

The reason why the other two nodes in the node pool are not reduced is that they both have pods in the kube-system namespace (and these pods are not created by DaemonSets). For details about node scale-in, see `Node Scaling Mechanisms <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0296.html>`__.
The reason why the other two nodes in the node pool are not reduced is that they both have pods in the kube-system namespace (and these pods are not created by DaemonSets). For details, see `Node Scaling Mechanisms <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0296.html>`__.
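The scale-in behavior described here can also be watched from the command line. A possible sketch, assuming kubectl access to the cluster (**<node-name>** is a placeholder):

.. code-block::

   # Watch the HPA object and the node list while the load drops.
   kubectl get hpa -n default -w      # current/desired replicas and metric values
   kubectl get nodes -w               # nodes added or removed by the autoscaler
   # See which kube-system pods keep a node from being scaled in.
   kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=<node-name>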
Summary
-------
@ -380,6 +380,6 @@ Summary
Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed.

.. |image1| image:: /_static/images/en-us_image_0000001360670117.png
.. |image2| image:: /_static/images/en-us_image_0000001274543860.png
.. |image2| image:: /_static/images/en-us_image_0000001533181077.png
.. |image3| image:: /_static/images/en-us_image_0000001274544060.png
.. |image3| image:: /_static/images/en-us_image_0000001482541956.png
.. |image4| image:: /_static/images/en-us_image_0000001274864616.png
.. |image4| image:: /_static/images/en-us_image_0000001482701968.png

View File

@ -137,7 +137,7 @@ Creating a Node Pool and a Node Scaling Policy
- **Max. Nodes**: Set it to **5**, indicating the maximum number of nodes in a node pool.
- **Specifications**: 2 vCPUs \| 4 GiB

Retain the defaults for other parameters. For details, see `Creating a Node Pool <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0012.html>`__.
Retain the defaults for other parameters. For details, see `Creating a Node Pool <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0012.html>`__.

#. Click **Add-ons** on the left of the cluster console, click **Edit** under the autoscaler add-on, modify the add-on configuration, enable **Auto node scale-in**, and configure scale-in parameters. For example, trigger scale-in when the node resource utilization is less than 50%.
@ -147,7 +147,7 @@ Creating a Node Pool and a Node Scaling Policy
#. Click **Node Scaling** on the left of the cluster console and click **Create Node Scaling Policy** in the upper right corner. Node scaling policies added here trigger scale-out based on the CPU/memory allocation rate or periodically.

As shown in the following figure, when the cluster CPU allocation rate is greater than 70%, one node will be added. A node scaling policy needs to be associated with a node pool. Multiple node pools can be associated. When you need to scale nodes, node with proper specifications will be added or reduced from the node pool based on the minimum waste principle. For details, see `Creating a Node Scaling Policy <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0209.html>`__.
As shown in the following figure, when the cluster CPU allocation rate is greater than 70%, one node will be added. A node scaling policy needs to be associated with a node pool. Multiple node pools can be associated. When you need to scale nodes, node with proper specifications will be added or reduced from the node pool based on the minimum waste principle. For details, see `Creating a Node Scaling Policy <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0209.html>`__.

|image3|
@ -372,7 +372,7 @@ Observing the Auto Scaling Process
You can also view the HPA policy execution history on the console. Wait until the one node is reduced.

The reason why the other two nodes in the node pool are not reduced is that they both have pods in the kube-system namespace (and these pods are not created by DaemonSets). For details about node scale-in, see `Node Scaling Mechanisms <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0296.html>`__.
The reason why the other two nodes in the node pool are not reduced is that they both have pods in the kube-system namespace (and these pods are not created by DaemonSets). For details, see `Node Scaling Mechanisms <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0296.html>`__.

Summary
-------
@ -380,6 +380,6 @@ Summary
Using HPA and CA can easily implement auto scaling in most scenarios. In addition, the scaling process of nodes and pods can be easily observed.

.. |image1| image:: /_static/images/en-us_image_0000001360670117.png
.. |image2| image:: /_static/images/en-us_image_0000001274543860.png
.. |image2| image:: /_static/images/en-us_image_0000001533181077.png
.. |image3| image:: /_static/images/en-us_image_0000001274544060.png
.. |image3| image:: /_static/images/en-us_image_0000001482541956.png
.. |image4| image:: /_static/images/en-us_image_0000001274864616.png
.. |image4| image:: /_static/images/en-us_image_0000001482701968.png

View File

@ -27,31 +27,10 @@ When creating a cluster of v1.23 or later, you can enable overload control durin
#. Log in to the CCE console and go to an existing cluster whose version is v1.23 or later.
#. On the cluster information page, view the master node information. If overload control is not enabled, a message is displayed. You can click **Start Now** to enable the function.

Overload Monitoring
-------------------

Disabling Cluster Overload Control
----------------------------------

**Method 1: Using the CCE console**

#. Log in to the CCE console and go to an existing cluster whose version is v1.23 or later.
#. On the **Cluster Information** page, click **Manage** in the upper right corner.
#. On the cluster information page, view the master node information. The overload level metric is displayed.
#. Set **support-overload** to **false** under **kube-apiserver**.
#. Click **OK**.
The overload levels are as follows:

- Circuit breaking: Rejects all external traffic.
- Severe overload: Rejects 75% of external traffic.
- Moderate overload: Rejects 50% of external traffic.
- Slight overload: Rejects 25% of external traffic.
- Normal: Does not reject external traffic.

**Method 2: Using the AOM console**

You can log in to the AOM console, create a dashboard, and add the metric named **vein_overload_level**.

The meanings of the monitoring metrics are as follows:

- 0: Circuit breaking. Rejects all external traffic.
- 1: Severe overload. Rejects 75% of external traffic.
- 2: Moderate overload. Rejects 50% of external traffic.
- 3: Slight overload. Rejects 25% of external traffic.
- 4: Normal. Does not reject external traffic.

View File

@ -682,6 +682,8 @@ SNI allows multiple TLS-based access domain names to be provided for external sy
You can enable SNI when the preceding conditions are met. The following uses the automatic creation of a load balancer as an example. In this example, **sni-test-secret-1** and **sni-test-secret-2** are SNI certificates. The domain names specified by the certificates must be the same as those in the certificates.

**For clusters of v1.21 or earlier:**

.. code-block::

   apiVersion: networking.k8s.io/v1beta1
@ -722,6 +724,51 @@ You can enable SNI when the preceding conditions are met. The following uses the
property:
  ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
**For clusters of v1.23 or later:**
.. code-block::
   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: ingress-test
     annotations:
       kubernetes.io/elb.class: union
       kubernetes.io/elb.port: '443'
       kubernetes.io/elb.autocreate:
         '{
             "type":"public",
             "bandwidth_name":"cce-bandwidth-******",
             "bandwidth_chargemode":"bandwidth",
             "bandwidth_size":5,
             "bandwidth_sharetype":"PER",
             "eip_type":"5_bgp"
         }'
       kubernetes.io/elb.tls-ciphers-policy: tls-1-2
   spec:
     tls:
     - secretName: ingress-test-secret
     - hosts:
         - example.top                # Domain name for which the certificate is issued
       secretName: sni-test-secret-1
     - hosts:
         - example.com                # Domain name for which the certificate is issued
       secretName: sni-test-secret-2
     rules:
     - host: ''
       http:
         paths:
         - path: '/'
           backend:
             service:
               name: <your_service_name>  # Replace it with the name of your target Service.
               port:
                 number: 8080             # Replace 8080 with the port number of your target Service.
           property:
             ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
           pathType: ImplementationSpecific
     ingressClassName: cce
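The SNI secrets referenced above (**sni-test-secret-1** and **sni-test-secret-2**) are ordinary Kubernetes TLS secrets. Assuming you already have a certificate and private key file for each domain, they could be created as follows; the file names are placeholders, and depending on the cluster version the console may instead expect a secret created through CCE:

.. code-block::

   # Hypothetical file names; use the certificate and key issued for each domain.
   kubectl create secret tls sni-test-secret-1 --cert=example.top.crt --key=example.top.key
   kubectl create secret tls sni-test-secret-2 --cert=example.com.crt --key=example.com.key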
Accessing Multiple Services
---------------------------

View File

@ -14,6 +14,7 @@ Prerequisites
Notes and Constraints
---------------------
- The node must have a CPU with 2 or more cores and at least 4 GB of memory.
- To ensure node stability, a certain amount of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. Therefore, the total number of node resources and assignable node resources in Kubernetes are different. The larger the node specifications, the more the containers deployed on the node. Therefore, more node resources need to be reserved to run Kubernetes components. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node <cce_10_0178>`.
- The node networking (such as the VM networking and container networking) is taken over by CCE. You are not allowed to add and delete NICs or change routes. If you modify the networking configuration, the availability of CCE may be affected. For example, the NIC named **gw_11cbf51a@eth0** on the node is the container network gateway and cannot be modified.
- During the node creation, software packages are downloaded from OBS using the domain name. You need to use a private DNS server to resolve the OBS domain name, and configure the subnet where the node resides with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.
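The gap between a node's total resources and its assignable resources described above can be checked directly with kubectl. A minimal sketch (**<node-name>** is a placeholder):

.. code-block::

   # "capacity" is the node total; "allocatable" is what Kubernetes can actually schedule
   # after resources are reserved for kubelet, kube-proxy, the container runtime, and so on.
   kubectl get node <node-name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'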

View File

@ -66,6 +66,10 @@ The system policies preset for CCE in IAM are **CCEFullAccess** and **CCEReadOnl
- **CCE FullAccess**: common operation permissions on CCE cluster resources, excluding the namespace-level permissions for the clusters (with Kubernetes RBAC enabled) and the privileged administrator operations, such as agency configuration and cluster certificate generation
- **CCE ReadOnlyAccess**: permissions to view CCE cluster resources, excluding the namespace-level permissions of the clusters (with Kubernetes RBAC enabled)

.. note::

   The **CCE Admin** and **CCE Viewer** roles will be discarded soon. You are advised to use **CCE FullAccess** and **CCE ReadOnlyAccess**.

Custom Policies
---------------

View File

@ -147,7 +147,7 @@ Procedure
| volumeName                                    | Name of the PV.                                                                               |
+-----------------------------------------------+-----------------------------------------------------------------------------------------------+

**1.11 <= K8s version < 1.11.7**
**Clusters from v1.11 to v1.11.7**

- .. _cce_10_0313__li19211184720504: