Update content
parent 68eb822707
commit e2a64a4766
Binary file not shown. | Before: 130 KiB
Binary file not shown. | Before: 104 KiB
Binary file not shown. | Before: 83 KiB
BIN umn/source/_static/images/en-us_image_0000001482541956.png (new file) | After: 69 KiB
BIN umn/source/_static/images/en-us_image_0000001482701968.png (new file) | After: 41 KiB
BIN umn/source/_static/images/en-us_image_0000001533181077.png (new file) | After: 91 KiB
@@ -9,6 +9,7 @@ Add-ons

- :ref:`coredns (System Resource Add-On, Mandatory) <cce_10_0129>`
- :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>`
- :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>`
- :ref:`npd <cce_10_0132>`
- :ref:`autoscaler <cce_10_0154>`
- :ref:`metrics-server <cce_10_0205>`
- :ref:`gpu-beta <cce_10_0141>`
@@ -22,6 +23,7 @@ Add-ons

   coredns_system_resource_add-on_mandatory
   storage-driver_system_resource_add-on_discarded
   everest_system_resource_add-on_mandatory
   npd
   autoscaler
   metrics-server
   gpu-beta
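For reference, the add-ons listed above typically run as ordinary workloads in the cluster's **kube-system** namespace, so their health can be checked from the command line. A minimal check, assuming kubectl access to the cluster:

.. code-block::

   # Pods created by the add-ons (coredns, everest, npd, autoscaler, metrics-server, ...)
   kubectl get pods -n kube-system -o wide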
umn/source/add-ons/npd.rst (new file, 384 lines)
File diff suppressed because it is too large
@@ -9,20 +9,22 @@ CCE provides multiple types of add-ons to extend cluster functions and meet feat
.. table:: **Table 1** Add-on list
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Add-on Name | Introduction |
+=========================================================================+==============================================================================================================================================================================================================================================================================================+
| :ref:`coredns (System Resource Add-On, Mandatory) <cce_10_0129>` | The coredns add-on is a DNS server that provides domain name resolution services for Kubernetes clusters. coredns chains plug-ins to provide additional features. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>` | storage-driver is a FlexVolume driver used to support IaaS storage services such as EVS, SFS, and OBS. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>` | Everest is a cloud native container storage system. Based on the Container Storage Interface (CSI), clusters of Kubernetes v1.15.6 or later obtain access to cloud storage services. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`autoscaler <cce_10_0154>` | The autoscaler add-on resizes a cluster based on pod scheduling status and resource usage. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`metrics-server <cce_10_0205>` | metrics-server is an aggregator for monitoring data of core cluster resources. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`gpu-beta <cce_10_0141>` | gpu-beta is a device management add-on that supports GPUs in containers. It supports only NVIDIA drivers. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`volcano <cce_10_0193>` | Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. |
+-------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Add-on Name | Introduction |
+=========================================================================+=================================================================================================================================================================================================================================================================================================================================+
| :ref:`coredns (System Resource Add-On, Mandatory) <cce_10_0129>` | The coredns add-on is a DNS server that provides domain name resolution services for Kubernetes clusters. coredns chains plug-ins to provide additional features. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`storage-driver (System Resource Add-On, Discarded) <cce_10_0127>` | storage-driver is a FlexVolume driver used to support IaaS storage services such as EVS, SFS, and OBS. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`everest (System Resource Add-On, Mandatory) <cce_10_0066>` | Everest is a cloud native container storage system. Based on the Container Storage Interface (CSI), clusters of Kubernetes v1.15.6 or later obtain access to cloud storage services. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`npd <cce_10_0132>` | node-problem-detector (npd for short) is an add-on that monitors abnormal events of cluster nodes and connects to a third-party monitoring platform. It is a daemon running on each node. It collects node issues from different daemons and reports them to the API server. The npd add-on can run as a DaemonSet or a daemon. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`autoscaler <cce_10_0154>` | The autoscaler add-on resizes a cluster based on pod scheduling status and resource usage. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`metrics-server <cce_10_0205>` | metrics-server is an aggregator for monitoring data of core cluster resources. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`gpu-beta <cce_10_0141>` | gpu-beta is a device management add-on that supports GPUs in containers. It supports only NVIDIA drivers. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :ref:`volcano <cce_10_0193>` | Volcano provides general-purpose, high-performance computing capabilities, such as job scheduling, heterogeneous chip management, and job running management, serving end users through computing frameworks for different industries, such as AI, big data, gene sequencing, and rendering. |
+-------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
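The npd row added above refers to node-problem-detector, which reports node issues as Kubernetes node conditions and events. A minimal way to inspect what it reports, assuming kubectl access (the node name is a placeholder):

.. code-block::

   # Node conditions reported by npd appear under the Conditions section
   kubectl describe node <node-name>

   # Node-related events that were reported to the API server
   kubectl get events --all-namespaces --field-selector involvedObject.kind=Node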
File diff suppressed because it is too large
@@ -137,7 +137,7 @@ Creating a Node Pool and a Node Scaling Policy

- **Max. Nodes**: Set it to **5**, indicating the maximum number of nodes in a node pool.
- **Specifications**: 2 vCPUs \| 4 GiB

Retain the defaults for other parameters. For details, see `Creating a Node Pool <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0012.html>`__.
Retain the defaults for other parameters. For details, see `Creating a Node Pool <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0012.html>`__.

#. Click **Add-ons** on the left of the cluster console, click **Edit** under the autoscaler add-on, modify the add-on configuration, enable **Auto node scale-in**, and configure scale-in parameters. For example, trigger scale-in when the node resource utilization is less than 50%.
@@ -147,7 +147,7 @@ Creating a Node Pool and a Node Scaling Policy

#. Click **Node Scaling** on the left of the cluster console and click **Create Node Scaling Policy** in the upper right corner. Node scaling policies added here trigger scale-out based on the CPU/memory allocation rate or periodically.

As shown in the following figure, when the cluster CPU allocation rate is greater than 70%, one node will be added. A node scaling policy must be associated with a node pool, and multiple node pools can be associated. When nodes need to be scaled, nodes with appropriate specifications will be added to or removed from the node pool based on the minimum waste principle. For details, see `Creating a Node Scaling Policy <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0209.html>`__.
As shown in the following figure, when the cluster CPU allocation rate is greater than 70%, one node will be added. A node scaling policy must be associated with a node pool, and multiple node pools can be associated. When nodes need to be scaled, nodes with appropriate specifications will be added to or removed from the node pool based on the minimum waste principle. For details, see `Creating a Node Scaling Policy <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0209.html>`__.

|image3|
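For reference, the scale-out triggered by such a policy can also be watched from the command line while the console shows the scaling records. A minimal check, assuming kubectl access to the cluster:

.. code-block::

   # Watch nodes join the cluster as the node pool scales out
   kubectl get nodes -w

   # Current CPU/memory usage per node (requires the metrics-server add-on)
   kubectl top nodes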
@@ -372,7 +372,7 @@ Observing the Auto Scaling Process

You can also view the HPA policy execution history on the console. Wait until one node is removed.

The other two nodes in the node pool are not removed because they both have pods in the kube-system namespace (and these pods are not created by DaemonSets). For details about node scale-in, see `Node Scaling Mechanisms <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_0296.html>`__.
The other two nodes in the node pool are not removed because they both have pods in the kube-system namespace (and these pods are not created by DaemonSets). For details, see `Node Scaling Mechanisms <https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_10_0296.html>`__.
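The HPA policy used in this walkthrough corresponds to a standard Kubernetes HorizontalPodAutoscaler. A minimal equivalent manifest is sketched below, assuming a cluster that supports the autoscaling/v2 API; the workload name and replica limits are placeholders, not values from this walkthrough:

.. code-block::

   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: hpa-example
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: hpa-example          # placeholder workload name
     minReplicas: 1
     maxReplicas: 100
     metrics:
     - type: Resource
       resource:
         name: cpu
         target:
           type: Utilization
           averageUtilization: 70   # scale out when average CPU utilization exceeds 70%

The current and target metrics of the policy can then be checked with ``kubectl get hpa hpa-example``.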
Summary
-------

@@ -380,6 +380,6 @@ Summary

Using HPA and CA makes auto scaling easy to implement in most scenarios, and the scaling process of nodes and pods can be easily observed.

.. |image1| image:: /_static/images/en-us_image_0000001360670117.png
.. |image2| image:: /_static/images/en-us_image_0000001274543860.png
.. |image3| image:: /_static/images/en-us_image_0000001274544060.png
.. |image4| image:: /_static/images/en-us_image_0000001274864616.png
.. |image2| image:: /_static/images/en-us_image_0000001533181077.png
.. |image3| image:: /_static/images/en-us_image_0000001482541956.png
.. |image4| image:: /_static/images/en-us_image_0000001482701968.png
@@ -27,31 +27,10 @@ When creating a cluster of v1.23 or later, you can enable overload control durin

#. Log in to the CCE console and go to an existing cluster whose version is v1.23 or later.
#. On the cluster information page, view the master node information. If overload control is not enabled, a message is displayed. You can click **Start Now** to enable the function.

Overload Monitoring
-------------------

**Method 1: Using the CCE console**

Disabling Cluster Overload Control
----------------------------------

#. Log in to the CCE console and go to an existing cluster whose version is v1.23 or later.
#. On the cluster information page, view the master node information. The overload level metric is displayed.

The overload levels are as follows:

- Circuit breaking: Rejects all external traffic.
- Severe overload: Rejects 75% of external traffic.
- Moderate overload: Rejects 50% of external traffic.
- Slight overload: Rejects 25% of external traffic.
- Normal: Does not reject external traffic.

**Method 2: Using the AOM console**

You can log in to the AOM console, create a dashboard, and add the metric named **vein_overload_level**.

The meanings of the metric values are as follows:

- 0: Circuit breaking: Rejects all external traffic.
- 1: Severe overload: Rejects 75% of external traffic.
- 2: Moderate overload: Rejects 50% of external traffic.
- 3: Slight overload: Rejects 25% of external traffic.
- 4: Normal: Does not reject external traffic.
#. On the **Cluster Information** page, click **Manage** in the upper right corner.
#. Set **support-overload** to **false** under **kube-apiserver**.
#. Click **OK**.
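If the **vein_overload_level** metric described above is also scraped into a Prometheus-compatible backend (an assumption; this page only covers the AOM dashboard), a minimal alerting rule could flag sustained overload. A sketch only:

.. code-block::

   groups:
   - name: cce-apiserver-overload
     rules:
     - alert: ApiServerOverloaded
       # 0 = circuit breaking, 4 = normal; alert on moderate overload or worse
       expr: vein_overload_level <= 2
       for: 5m
       labels:
         severity: warning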
@@ -682,6 +682,8 @@ SNI allows multiple TLS-based access domain names to be provided for external sy

You can enable SNI when the preceding conditions are met. The following uses the automatic creation of a load balancer as an example. In this example, **sni-test-secret-1** and **sni-test-secret-2** are SNI certificates. The domain names specified for SNI must be the same as those in the certificates.

**For clusters of v1.21 or earlier:**

.. code-block::

   apiVersion: networking.k8s.io/v1beta1

@@ -722,6 +724,51 @@ You can enable SNI when the preceding conditions are met. The following uses the

           property:
             ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
**For clusters of v1.23 or later:**

.. code-block::

   apiVersion: networking.k8s.io/v1
   kind: Ingress
   metadata:
     name: ingress-test
     annotations:
       kubernetes.io/elb.class: union
       kubernetes.io/elb.port: '443'
       kubernetes.io/elb.autocreate:
         '{
             "type":"public",
             "bandwidth_name":"cce-bandwidth-******",
             "bandwidth_chargemode":"bandwidth",
             "bandwidth_size":5,
             "bandwidth_sharetype":"PER",
             "eip_type":"5_bgp"
         }'
       kubernetes.io/elb.tls-ciphers-policy: tls-1-2
   spec:
     tls:
     - secretName: ingress-test-secret
     - hosts:
         - example.top  # Domain name specified when the certificate is issued
       secretName: sni-test-secret-1
     - hosts:
         - example.com  # Domain name specified when the certificate is issued
       secretName: sni-test-secret-2
     rules:
     - host: ''
       http:
         paths:
         - path: '/'
           backend:
             service:
               name: <your_service_name>  # Replace it with the name of your target Service.
               port:
                 number: 8080             # Replace 8080 with the port number of your target Service.
           property:
             ingress.beta.kubernetes.io/url-match-mode: STARTS_WITH
           pathType: ImplementationSpecific
     ingressClassName: cce
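The manifest above references TLS secrets that must already exist in the cluster. A minimal way to create them and apply the ingress with kubectl is sketched below; the certificate, key, and manifest file names are placeholders:

.. code-block::

   # Default certificate plus one certificate per SNI domain name
   kubectl create secret tls ingress-test-secret --cert=default.crt --key=default.key
   kubectl create secret tls sni-test-secret-1 --cert=example_top.crt --key=example_top.key
   kubectl create secret tls sni-test-secret-2 --cert=example_com.crt --key=example_com.key

   kubectl create -f ingress-test.yaml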
Accessing Multiple Services
---------------------------
@@ -14,6 +14,7 @@ Prerequisites

Notes and Constraints
---------------------

- The node must have a 2-core or higher CPU and 4 GB or larger memory.
- To ensure node stability, a certain amount of CCE node resources will be reserved for Kubernetes components (such as kubelet, kube-proxy, and docker) based on the node specifications. Therefore, the total node resources and the node resources that can be allocated in Kubernetes differ. The larger the node specifications, the more containers are deployed on the node, so more node resources need to be reserved to run Kubernetes components. For details, see :ref:`Formula for Calculating the Reserved Resources of a Node <cce_10_0178>`. (A quick check is shown after this list.)
- The node networking (such as the VM networking and container networking) is taken over by CCE. You are not allowed to add or delete NICs or change routes. If you modify the networking configuration, the availability of CCE may be affected. For example, the NIC named **gw_11cbf51a@eth0** on the node is the container network gateway and cannot be modified.
- During node creation, software packages are downloaded from OBS using the domain name. A private DNS server must be used to resolve the OBS domain name, so the subnet where the node resides needs to be configured with a private DNS server address. When you create a subnet, the private DNS server is used by default. If you change the subnet DNS, ensure that the DNS server in use can resolve the OBS domain name.
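A quick way to see the effect of the reserved resources mentioned in the list above, assuming kubectl access (the node name is a placeholder):

.. code-block::

   # Capacity shows the total node resources; Allocatable shows what remains for pods
   # after resources reserved for Kubernetes components are subtracted.
   kubectl describe node <node-name> | grep -A 6 -E "Capacity|Allocatable"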
@@ -66,6 +66,10 @@ The system policies preset for CCE in IAM are **CCEFullAccess** and **CCEReadOnl

- **CCE FullAccess**: common operation permissions on CCE cluster resources, excluding the namespace-level permissions for the clusters (with Kubernetes RBAC enabled) and the privileged administrator operations, such as agency configuration and cluster certificate generation
- **CCE ReadOnlyAccess**: permissions to view CCE cluster resources, excluding the namespace-level permissions of the clusters (with Kubernetes RBAC enabled)

.. note::

   The **CCE Admin** and **CCE Viewer** roles will be discarded soon. You are advised to use **CCE FullAccess** and **CCE ReadOnlyAccess**.

Custom Policies
---------------
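For reference, a custom policy is an IAM policy document in JSON. The following is only an illustrative sketch of the structure; the action names are examples, and the exact actions supported by CCE should be taken from the CCE permissions reference:

.. code-block::

   {
       "Version": "1.1",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "cce:cluster:list",
                   "cce:cluster:get"
               ]
           }
       ]
   }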
@@ -147,7 +147,7 @@ Procedure
| volumeName | Name of the PV. |
+-----------------------------------------------+---------------------------------------------------------------------------------------------+
**1.11 <= K8s version < 1.11.7**
**Clusters from v1.11 to v1.11.7**
- .. _cce_10_0313__li19211184720504:
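The **volumeName** parameter in the table above is what binds a PVC to one specific PV. A minimal illustration of that binding in a PVC manifest, shown as a sketch (names and size are placeholders and are not taken from this page):

.. code-block::

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-example
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
     volumeName: pv-example   # name of the PV to bind to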